From 6133fb1aa137b35a8fa91ec17977ebf6a41456ec Mon Sep 17 00:00:00 2001 From: "Denis V. Lunev" Date: Thu, 28 Feb 2008 20:46:17 -0800 Subject: [NETNS]: Disable inetaddr notifiers in namespaces other than initial. ip_fib_init is kept enabled. It is already namespace-aware. Signed-off-by: Denis V. Lunev Signed-off-by: David S. Miller --- drivers/s390/net/qeth_main.c | 3 +++ 1 file changed, 3 insertions(+) (limited to 'drivers/s390') diff --git a/drivers/s390/net/qeth_main.c b/drivers/s390/net/qeth_main.c index 62606ce26e55..d063e9ecf804 100644 --- a/drivers/s390/net/qeth_main.c +++ b/drivers/s390/net/qeth_main.c @@ -8622,6 +8622,9 @@ qeth_ip_event(struct notifier_block *this, struct qeth_ipaddr *addr; struct qeth_card *card; + if (dev->nd_net != &init_net) + return NOTIFY_DONE; + QETH_DBF_TEXT(trace,3,"ipevent"); card = qeth_get_card_from_dev(dev); if (!card) -- cgit v1.2.3-59-g8ed1b From f423f73506ba8e837b5fdcd8c8be50078deb123d Mon Sep 17 00:00:00 2001 From: Ursula Braun Date: Fri, 8 Feb 2008 00:03:48 +0100 Subject: drivers/s390/net: Kconfig brush up adapt drivers/s390/net/Kconfig to current IBM wording and further cosmetics Signed-off-by: Peter Tiedemann Signed-off-by: Ursula Braun Signed-off-by: Jeff Garzik --- drivers/s390/net/Kconfig | 43 ++++++++++++++++++++++--------------------- 1 file changed, 22 insertions(+), 21 deletions(-) (limited to 'drivers/s390') diff --git a/drivers/s390/net/Kconfig b/drivers/s390/net/Kconfig index eada69dec4fe..9ef029e9c838 100644 --- a/drivers/s390/net/Kconfig +++ b/drivers/s390/net/Kconfig @@ -5,22 +5,23 @@ config LCS tristate "Lan Channel Station Interface" depends on CCW && NETDEVICES && (NET_ETHERNET || TR || FDDI) help - Select this option if you want to use LCS networking on IBM S/390 - or zSeries. This device driver supports Token Ring (IEEE 802.5), - FDDI (IEEE 802.7) and Ethernet. - This option is also available as a module which will be - called lcs.ko. If you do not know what it is, it's safe to say "Y". + Select this option if you want to use LCS networking on IBM System z. + This device driver supports Token Ring (IEEE 802.5), + FDDI (IEEE 802.7) and Ethernet. + To compile as a module, choose M. The module name is lcs.ko. + If you do not know what it is, it's safe to choose Y. config CTC tristate "CTC device support" depends on CCW && NETDEVICES help - Select this option if you want to use channel-to-channel networking - on IBM S/390 or zSeries. This device driver supports real CTC - coupling using ESCON. It also supports virtual CTCs when running - under VM. It will use the channel device configuration if this is - available. This option is also available as a module which will be - called ctc.ko. If you do not know what it is, it's safe to say "Y". + Select this option if you want to use channel-to-channel + point-to-point networking on IBM System z. + This device driver supports real CTC coupling using ESCON. + It also supports virtual CTCs when running under VM. + To compile as a module, choose M. The module name is ctc.ko. + To compile into the kernel, choose Y. + If you do not need any channel-to-channel connection, choose N. config NETIUCV tristate "IUCV network device support (VM only)" @@ -29,9 +30,9 @@ config NETIUCV Select this option if you want to use inter-user communication vehicle networking under VM or VIF. It enables a fast communication link between VM guests. Using ifconfig a point-to-point connection - can be established to the Linux for zSeries and S7390 system - running on the other VM guest. 
This option is also available - as a module which will be called netiucv.ko. If unsure, say "Y". + can be established to the Linux on IBM System z + running on the other VM guest. To compile as a module, choose M. + The module name is netiucv.ko. If unsure, choose Y. config SMSGIUCV tristate "IUCV special message support (VM only)" @@ -47,22 +48,22 @@ config CLAW This driver supports channel attached CLAW devices. CLAW is Common Link Access for Workstation. Common devices that use CLAW are RS/6000s, Cisco Routers (CIP) and 3172 devices. - To compile as a module choose M here: The module will be called - claw.ko to compile into the kernel choose Y + To compile as a module, choose M. The module name is claw.ko. + To compile into the kernel, choose Y. config QETH tristate "Gigabit Ethernet device support" depends on CCW && NETDEVICES && IP_MULTICAST && QDIO help - This driver supports the IBM S/390 and zSeries OSA Express adapters + This driver supports the IBM System z OSA Express adapters in QDIO mode (all media types), HiperSockets interfaces and VM GuestLAN interfaces in QDIO and HIPER mode. - For details please refer to the documentation provided by IBM at - + For details please refer to the documentation provided by IBM at + - To compile this driver as a module, choose M here: the - module will be called qeth.ko. + To compile this driver as a module, choose M. + The module name is qeth.ko. comment "Gigabit Ethernet default settings" -- cgit v1.2.3-59-g8ed1b From 293d984f0e3604c04dcdbf00117ddc1e5d4b1909 Mon Sep 17 00:00:00 2001 From: Peter Tiedemann Date: Fri, 8 Feb 2008 00:03:49 +0100 Subject: ctcm: infrastructure for replaced ctc driver ctcm driver supports the channel-to-channel connections of the old ctc driver plus an additional MPC protocol to provide SNA connectivity. This new ctcm driver replaces the existing ctc driver. Signed-off-by: Peter Tiedemann Signed-off-by: Ursula Braun Signed-off-by: Jeff Garzik --- drivers/s390/net/Kconfig | 12 +- drivers/s390/net/Makefile | 5 +- drivers/s390/net/ctcm_dbug.c | 67 ++ drivers/s390/net/ctcm_dbug.h | 158 +++ drivers/s390/net/ctcm_fsms.c | 2347 ++++++++++++++++++++++++++++++++++++++ drivers/s390/net/ctcm_fsms.h | 359 ++++++ drivers/s390/net/ctcm_main.c | 1772 +++++++++++++++++++++++++++++ drivers/s390/net/ctcm_main.h | 287 +++++ drivers/s390/net/ctcm_mpc.c | 2472 +++++++++++++++++++++++++++++++++++++++++ drivers/s390/net/ctcm_mpc.h | 239 ++++ drivers/s390/net/ctcm_sysfs.c | 210 ++++ 11 files changed, 7920 insertions(+), 8 deletions(-) create mode 100644 drivers/s390/net/ctcm_dbug.c create mode 100644 drivers/s390/net/ctcm_dbug.h create mode 100644 drivers/s390/net/ctcm_fsms.c create mode 100644 drivers/s390/net/ctcm_fsms.h create mode 100644 drivers/s390/net/ctcm_main.c create mode 100644 drivers/s390/net/ctcm_main.h create mode 100644 drivers/s390/net/ctcm_mpc.c create mode 100644 drivers/s390/net/ctcm_mpc.h create mode 100644 drivers/s390/net/ctcm_sysfs.c (limited to 'drivers/s390') diff --git a/drivers/s390/net/Kconfig b/drivers/s390/net/Kconfig index 9ef029e9c838..773f5a6d5822 100644 --- a/drivers/s390/net/Kconfig +++ b/drivers/s390/net/Kconfig @@ -11,15 +11,17 @@ config LCS To compile as a module, choose M. The module name is lcs.ko. If you do not know what it is, it's safe to choose Y. -config CTC - tristate "CTC device support" +config CTCM + tristate "CTC and MPC SNA device support" depends on CCW && NETDEVICES help Select this option if you want to use channel-to-channel point-to-point networking on IBM System z. 
This device driver supports real CTC coupling using ESCON. It also supports virtual CTCs when running under VM. - To compile as a module, choose M. The module name is ctc.ko. + This driver also supports channel-to-channel MPC SNA devices. + MPC is an SNA protocol device used by Communication Server for Linux. + To compile as a module, choose M. The module name is ctcm.ko. To compile into the kernel, choose Y. If you do not need any channel-to-channel connection, choose N. @@ -84,7 +86,7 @@ config QETH_VLAN 802.1q VLAN support in the qeth device driver. config CCWGROUP - tristate - default (LCS || CTC || QETH) + tristate + default (LCS || CTCM || QETH) endmenu diff --git a/drivers/s390/net/Makefile b/drivers/s390/net/Makefile index bbe3ab2e93d9..f6d189a8a451 100644 --- a/drivers/s390/net/Makefile +++ b/drivers/s390/net/Makefile @@ -2,11 +2,10 @@ # S/390 network devices # -ctc-objs := ctcmain.o ctcdbug.o - +ctcm-y += ctcm_main.o ctcm_fsms.o ctcm_mpc.o ctcm_sysfs.o ctcm_dbug.o +obj-$(CONFIG_CTCM) += ctcm.o fsm.o cu3088.o obj-$(CONFIG_NETIUCV) += netiucv.o fsm.o obj-$(CONFIG_SMSGIUCV) += smsgiucv.o -obj-$(CONFIG_CTC) += ctc.o fsm.o cu3088.o obj-$(CONFIG_LCS) += lcs.o cu3088.o obj-$(CONFIG_CLAW) += claw.o cu3088.o qeth-y := qeth_main.o qeth_mpc.o qeth_sys.o qeth_eddp.o diff --git a/drivers/s390/net/ctcm_dbug.c b/drivers/s390/net/ctcm_dbug.c new file mode 100644 index 000000000000..8eb25d00b2e7 --- /dev/null +++ b/drivers/s390/net/ctcm_dbug.c @@ -0,0 +1,67 @@ +/* + * drivers/s390/net/ctcm_dbug.c + * + * Copyright IBM Corp. 2001, 2007 + * Authors: Peter Tiedemann (ptiedem@de.ibm.com) + * + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "ctcm_dbug.h" + +/* + * Debug Facility Stuff + */ + +DEFINE_PER_CPU(char[256], ctcm_dbf_txt_buf); + +struct ctcm_dbf_info ctcm_dbf[CTCM_DBF_INFOS] = { + [CTCM_DBF_SETUP] = {"ctc_setup", 8, 1, 64, 5, NULL}, + [CTCM_DBF_ERROR] = {"ctc_error", 8, 1, 64, 3, NULL}, + [CTCM_DBF_TRACE] = {"ctc_trace", 8, 1, 64, 3, NULL}, + [CTCM_DBF_MPC_SETUP] = {"mpc_setup", 8, 1, 64, 5, NULL}, + [CTCM_DBF_MPC_ERROR] = {"mpc_error", 8, 1, 64, 3, NULL}, + [CTCM_DBF_MPC_TRACE] = {"mpc_trace", 8, 1, 64, 3, NULL}, +}; + +void ctcm_unregister_dbf_views(void) +{ + int x; + for (x = 0; x < CTCM_DBF_INFOS; x++) { + debug_unregister(ctcm_dbf[x].id); + ctcm_dbf[x].id = NULL; + } +} + +int ctcm_register_dbf_views(void) +{ + int x; + for (x = 0; x < CTCM_DBF_INFOS; x++) { + /* register the areas */ + ctcm_dbf[x].id = debug_register(ctcm_dbf[x].name, + ctcm_dbf[x].pages, + ctcm_dbf[x].areas, + ctcm_dbf[x].len); + if (ctcm_dbf[x].id == NULL) { + ctcm_unregister_dbf_views(); + return -ENOMEM; + } + + /* register a view */ + debug_register_view(ctcm_dbf[x].id, &debug_hex_ascii_view); + /* set a passing level */ + debug_set_level(ctcm_dbf[x].id, ctcm_dbf[x].level); + } + + return 0; +} + diff --git a/drivers/s390/net/ctcm_dbug.h b/drivers/s390/net/ctcm_dbug.h new file mode 100644 index 000000000000..fdff34fe59a2 --- /dev/null +++ b/drivers/s390/net/ctcm_dbug.h @@ -0,0 +1,158 @@ +/* + * drivers/s390/net/ctcm_dbug.h + * + * Copyright IBM Corp. 
2001, 2007 + * Authors: Peter Tiedemann (ptiedem@de.ibm.com) + * + */ + +#ifndef _CTCM_DBUG_H_ +#define _CTCM_DBUG_H_ + +/* + * Debug Facility stuff + */ + +#include + +#ifdef DEBUG + #define do_debug 1 +#else + #define do_debug 0 +#endif +#ifdef DEBUGDATA + #define do_debug_data 1 +#else + #define do_debug_data 0 +#endif +#ifdef DEBUGCCW + #define do_debug_ccw 1 +#else + #define do_debug_ccw 0 +#endif + +/* define dbf debug levels similar to kernel msg levels */ +#define CTC_DBF_ALWAYS 0 /* always print this */ +#define CTC_DBF_EMERG 0 /* system is unusable */ +#define CTC_DBF_ALERT 1 /* action must be taken immediately */ +#define CTC_DBF_CRIT 2 /* critical conditions */ +#define CTC_DBF_ERROR 3 /* error conditions */ +#define CTC_DBF_WARN 4 /* warning conditions */ +#define CTC_DBF_NOTICE 5 /* normal but significant condition */ +#define CTC_DBF_INFO 5 /* informational */ +#define CTC_DBF_DEBUG 6 /* debug-level messages */ + +DECLARE_PER_CPU(char[256], ctcm_dbf_txt_buf); + +enum ctcm_dbf_names { + CTCM_DBF_SETUP, + CTCM_DBF_ERROR, + CTCM_DBF_TRACE, + CTCM_DBF_MPC_SETUP, + CTCM_DBF_MPC_ERROR, + CTCM_DBF_MPC_TRACE, + CTCM_DBF_INFOS /* must be last element */ +}; + +struct ctcm_dbf_info { + char name[DEBUG_MAX_NAME_LEN]; + int pages; + int areas; + int len; + int level; + debug_info_t *id; +}; + +extern struct ctcm_dbf_info ctcm_dbf[CTCM_DBF_INFOS]; + +int ctcm_register_dbf_views(void); +void ctcm_unregister_dbf_views(void); + +static inline const char *strtail(const char *s, int n) +{ + int l = strlen(s); + return (l > n) ? s + (l - n) : s; +} + +/* sort out levels early to avoid unnecessary sprintfs */ +static inline int ctcm_dbf_passes(debug_info_t *dbf_grp, int level) +{ + return (dbf_grp->level >= level); +} + +#define CTCM_FUNTAIL strtail((char *)__func__, 16) + +#define CTCM_DBF_TEXT(name, level, text) \ + do { \ + debug_text_event(ctcm_dbf[CTCM_DBF_##name].id, level, text); \ + } while (0) + +#define CTCM_DBF_HEX(name, level, addr, len) \ + do { \ + debug_event(ctcm_dbf[CTCM_DBF_##name].id, \ + level, (void *)(addr), len); \ + } while (0) + +#define CTCM_DBF_TEXT_(name, level, text...) \ + do { \ + if (ctcm_dbf_passes(ctcm_dbf[CTCM_DBF_##name].id, level)) { \ + char *ctcm_dbf_txt_buf = \ + get_cpu_var(ctcm_dbf_txt_buf); \ + sprintf(ctcm_dbf_txt_buf, text); \ + debug_text_event(ctcm_dbf[CTCM_DBF_##name].id, \ + level, ctcm_dbf_txt_buf); \ + put_cpu_var(ctcm_dbf_txt_buf); \ + } \ + } while (0) + +/* + * cat : one of {setup, mpc_setup, trace, mpc_trace, error, mpc_error}. + * dev : netdevice with valid name field. + * text: any text string. + */ +#define CTCM_DBF_DEV_NAME(cat, dev, text) \ + do { \ + CTCM_DBF_TEXT_(cat, CTC_DBF_INFO, "%s(%s) : %s", \ + CTCM_FUNTAIL, dev->name, text); \ + } while (0) + +#define MPC_DBF_DEV_NAME(cat, dev, text) \ + do { \ + CTCM_DBF_TEXT_(MPC_##cat, CTC_DBF_INFO, "%s(%s) : %s", \ + CTCM_FUNTAIL, dev->name, text); \ + } while (0) + +#define CTCMY_DBF_DEV_NAME(cat, dev, text) \ + do { \ + if (IS_MPCDEV(dev)) \ + MPC_DBF_DEV_NAME(cat, dev, text); \ + else \ + CTCM_DBF_DEV_NAME(cat, dev, text); \ + } while (0) + +/* + * cat : one of {setup, mpc_setup, trace, mpc_trace, error, mpc_error}. + * dev : netdevice. + * text: any text string. 
+ */ +#define CTCM_DBF_DEV(cat, dev, text) \ + do { \ + CTCM_DBF_TEXT_(cat, CTC_DBF_INFO, "%s(%p) : %s", \ + CTCM_FUNTAIL, dev, text); \ + } while (0) + +#define MPC_DBF_DEV(cat, dev, text) \ + do { \ + CTCM_DBF_TEXT_(MPC_##cat, CTC_DBF_INFO, "%s(%p) : %s", \ + CTCM_FUNTAIL, dev, text); \ + } while (0) + +#define CTCMY_DBF_DEV(cat, dev, text) \ + do { \ + if (IS_MPCDEV(dev)) \ + MPC_DBF_DEV(cat, dev, text); \ + else \ + CTCM_DBF_DEV(cat, dev, text); \ + } while (0) + +#endif diff --git a/drivers/s390/net/ctcm_fsms.c b/drivers/s390/net/ctcm_fsms.c new file mode 100644 index 000000000000..2a106f3a076d --- /dev/null +++ b/drivers/s390/net/ctcm_fsms.c @@ -0,0 +1,2347 @@ +/* + * drivers/s390/net/ctcm_fsms.c + * + * Copyright IBM Corp. 2001, 2007 + * Authors: Fritz Elfert (felfert@millenux.com) + * Peter Tiedemann (ptiedem@de.ibm.com) + * MPC additions : + * Belinda Thompson (belindat@us.ibm.com) + * Andy Richter (richtera@us.ibm.com) + */ + +#undef DEBUG +#undef DEBUGDATA +#undef DEBUGCCW + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include + +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include + +#include + +#include "fsm.h" +#include "cu3088.h" + +#include "ctcm_dbug.h" +#include "ctcm_main.h" +#include "ctcm_fsms.h" + +const char *dev_state_names[] = { + [DEV_STATE_STOPPED] = "Stopped", + [DEV_STATE_STARTWAIT_RXTX] = "StartWait RXTX", + [DEV_STATE_STARTWAIT_RX] = "StartWait RX", + [DEV_STATE_STARTWAIT_TX] = "StartWait TX", + [DEV_STATE_STOPWAIT_RXTX] = "StopWait RXTX", + [DEV_STATE_STOPWAIT_RX] = "StopWait RX", + [DEV_STATE_STOPWAIT_TX] = "StopWait TX", + [DEV_STATE_RUNNING] = "Running", +}; + +const char *dev_event_names[] = { + [DEV_EVENT_START] = "Start", + [DEV_EVENT_STOP] = "Stop", + [DEV_EVENT_RXUP] = "RX up", + [DEV_EVENT_TXUP] = "TX up", + [DEV_EVENT_RXDOWN] = "RX down", + [DEV_EVENT_TXDOWN] = "TX down", + [DEV_EVENT_RESTART] = "Restart", +}; + +const char *ctc_ch_event_names[] = { + [CTC_EVENT_IO_SUCCESS] = "ccw_device success", + [CTC_EVENT_IO_EBUSY] = "ccw_device busy", + [CTC_EVENT_IO_ENODEV] = "ccw_device enodev", + [CTC_EVENT_IO_UNKNOWN] = "ccw_device unknown", + [CTC_EVENT_ATTNBUSY] = "Status ATTN & BUSY", + [CTC_EVENT_ATTN] = "Status ATTN", + [CTC_EVENT_BUSY] = "Status BUSY", + [CTC_EVENT_UC_RCRESET] = "Unit check remote reset", + [CTC_EVENT_UC_RSRESET] = "Unit check remote system reset", + [CTC_EVENT_UC_TXTIMEOUT] = "Unit check TX timeout", + [CTC_EVENT_UC_TXPARITY] = "Unit check TX parity", + [CTC_EVENT_UC_HWFAIL] = "Unit check Hardware failure", + [CTC_EVENT_UC_RXPARITY] = "Unit check RX parity", + [CTC_EVENT_UC_ZERO] = "Unit check ZERO", + [CTC_EVENT_UC_UNKNOWN] = "Unit check Unknown", + [CTC_EVENT_SC_UNKNOWN] = "SubChannel check Unknown", + [CTC_EVENT_MC_FAIL] = "Machine check failure", + [CTC_EVENT_MC_GOOD] = "Machine check operational", + [CTC_EVENT_IRQ] = "IRQ normal", + [CTC_EVENT_FINSTAT] = "IRQ final", + [CTC_EVENT_TIMER] = "Timer", + [CTC_EVENT_START] = "Start", + [CTC_EVENT_STOP] = "Stop", + /* + * additional MPC events + */ + [CTC_EVENT_SEND_XID] = "XID Exchange", + [CTC_EVENT_RSWEEP_TIMER] = "MPC Group Sweep Timer", +}; + +const char *ctc_ch_state_names[] = { + [CTC_STATE_IDLE] = "Idle", + [CTC_STATE_STOPPED] = "Stopped", + [CTC_STATE_STARTWAIT] = "StartWait", + [CTC_STATE_STARTRETRY] = "StartRetry", + [CTC_STATE_SETUPWAIT] = "SetupWait", + [CTC_STATE_RXINIT] = "RX init", + [CTC_STATE_TXINIT] = "TX init", + [CTC_STATE_RX] = "RX", + [CTC_STATE_TX] = 
"TX", + [CTC_STATE_RXIDLE] = "RX idle", + [CTC_STATE_TXIDLE] = "TX idle", + [CTC_STATE_RXERR] = "RX error", + [CTC_STATE_TXERR] = "TX error", + [CTC_STATE_TERM] = "Terminating", + [CTC_STATE_DTERM] = "Restarting", + [CTC_STATE_NOTOP] = "Not operational", + /* + * additional MPC states + */ + [CH_XID0_PENDING] = "Pending XID0 Start", + [CH_XID0_INPROGRESS] = "In XID0 Negotiations ", + [CH_XID7_PENDING] = "Pending XID7 P1 Start", + [CH_XID7_PENDING1] = "Active XID7 P1 Exchange ", + [CH_XID7_PENDING2] = "Pending XID7 P2 Start ", + [CH_XID7_PENDING3] = "Active XID7 P2 Exchange ", + [CH_XID7_PENDING4] = "XID7 Complete - Pending READY ", +}; + +static void ctcm_action_nop(fsm_instance *fi, int event, void *arg); + +/* + * ----- static ctcm actions for channel statemachine ----- + * +*/ +static void chx_txdone(fsm_instance *fi, int event, void *arg); +static void chx_rx(fsm_instance *fi, int event, void *arg); +static void chx_rxidle(fsm_instance *fi, int event, void *arg); +static void chx_firstio(fsm_instance *fi, int event, void *arg); +static void ctcm_chx_setmode(fsm_instance *fi, int event, void *arg); +static void ctcm_chx_start(fsm_instance *fi, int event, void *arg); +static void ctcm_chx_haltio(fsm_instance *fi, int event, void *arg); +static void ctcm_chx_stopped(fsm_instance *fi, int event, void *arg); +static void ctcm_chx_stop(fsm_instance *fi, int event, void *arg); +static void ctcm_chx_fail(fsm_instance *fi, int event, void *arg); +static void ctcm_chx_setuperr(fsm_instance *fi, int event, void *arg); +static void ctcm_chx_restart(fsm_instance *fi, int event, void *arg); +static void ctcm_chx_rxiniterr(fsm_instance *fi, int event, void *arg); +static void ctcm_chx_rxinitfail(fsm_instance *fi, int event, void *arg); +static void ctcm_chx_rxdisc(fsm_instance *fi, int event, void *arg); +static void ctcm_chx_txiniterr(fsm_instance *fi, int event, void *arg); +static void ctcm_chx_txretry(fsm_instance *fi, int event, void *arg); +static void ctcm_chx_iofatal(fsm_instance *fi, int event, void *arg); + +/* + * ----- static ctcmpc actions for ctcmpc channel statemachine ----- + * +*/ +static void ctcmpc_chx_txdone(fsm_instance *fi, int event, void *arg); +static void ctcmpc_chx_rx(fsm_instance *fi, int event, void *arg); +static void ctcmpc_chx_firstio(fsm_instance *fi, int event, void *arg); +/* shared : +static void ctcm_chx_setmode(fsm_instance *fi, int event, void *arg); +static void ctcm_chx_start(fsm_instance *fi, int event, void *arg); +static void ctcm_chx_haltio(fsm_instance *fi, int event, void *arg); +static void ctcm_chx_stopped(fsm_instance *fi, int event, void *arg); +static void ctcm_chx_stop(fsm_instance *fi, int event, void *arg); +static void ctcm_chx_fail(fsm_instance *fi, int event, void *arg); +static void ctcm_chx_setuperr(fsm_instance *fi, int event, void *arg); +static void ctcm_chx_restart(fsm_instance *fi, int event, void *arg); +static void ctcm_chx_rxiniterr(fsm_instance *fi, int event, void *arg); +static void ctcm_chx_rxinitfail(fsm_instance *fi, int event, void *arg); +static void ctcm_chx_rxdisc(fsm_instance *fi, int event, void *arg); +static void ctcm_chx_txiniterr(fsm_instance *fi, int event, void *arg); +static void ctcm_chx_txretry(fsm_instance *fi, int event, void *arg); +static void ctcm_chx_iofatal(fsm_instance *fi, int event, void *arg); +*/ +static void ctcmpc_chx_attn(fsm_instance *fsm, int event, void *arg); +static void ctcmpc_chx_attnbusy(fsm_instance *, int, void *); +static void ctcmpc_chx_resend(fsm_instance *, int, void *); +static void 
ctcmpc_chx_send_sweep(fsm_instance *fsm, int event, void *arg); + +/** + * Check return code of a preceeding ccw_device call, halt_IO etc... + * + * ch : The channel, the error belongs to. + * Returns the error code (!= 0) to inspect. + */ +void ctcm_ccw_check_rc(struct channel *ch, int rc, char *msg) +{ + CTCM_DBF_TEXT_(ERROR, CTC_DBF_ERROR, + "ccw error %s (%s): %04x\n", ch->id, msg, rc); + switch (rc) { + case -EBUSY: + ctcm_pr_warn("%s (%s): Busy !\n", ch->id, msg); + fsm_event(ch->fsm, CTC_EVENT_IO_EBUSY, ch); + break; + case -ENODEV: + ctcm_pr_emerg("%s (%s): Invalid device called for IO\n", + ch->id, msg); + fsm_event(ch->fsm, CTC_EVENT_IO_ENODEV, ch); + break; + default: + ctcm_pr_emerg("%s (%s): Unknown error in do_IO %04x\n", + ch->id, msg, rc); + fsm_event(ch->fsm, CTC_EVENT_IO_UNKNOWN, ch); + } +} + +void ctcm_purge_skb_queue(struct sk_buff_head *q) +{ + struct sk_buff *skb; + + CTCM_DBF_TEXT(TRACE, 3, __FUNCTION__); + + while ((skb = skb_dequeue(q))) { + atomic_dec(&skb->users); + dev_kfree_skb_any(skb); + } +} + +/** + * NOP action for statemachines + */ +static void ctcm_action_nop(fsm_instance *fi, int event, void *arg) +{ +} + +/* + * Actions for channel - statemachines. + */ + +/** + * Normal data has been send. Free the corresponding + * skb (it's in io_queue), reset dev->tbusy and + * revert to idle state. + * + * fi An instance of a channel statemachine. + * event The event, just happened. + * arg Generic pointer, casted from channel * upon call. + */ +static void chx_txdone(fsm_instance *fi, int event, void *arg) +{ + struct channel *ch = arg; + struct net_device *dev = ch->netdev; + struct ctcm_priv *priv = dev->priv; + struct sk_buff *skb; + int first = 1; + int i; + unsigned long duration; + struct timespec done_stamp = current_kernel_time(); /* xtime */ + + duration = + (done_stamp.tv_sec - ch->prof.send_stamp.tv_sec) * 1000000 + + (done_stamp.tv_nsec - ch->prof.send_stamp.tv_nsec) / 1000; + if (duration > ch->prof.tx_time) + ch->prof.tx_time = duration; + + if (ch->irb->scsw.count != 0) + ctcm_pr_debug("%s: TX not complete, remaining %d bytes\n", + dev->name, ch->irb->scsw.count); + fsm_deltimer(&ch->timer); + while ((skb = skb_dequeue(&ch->io_queue))) { + priv->stats.tx_packets++; + priv->stats.tx_bytes += skb->len - LL_HEADER_LENGTH; + if (first) { + priv->stats.tx_bytes += 2; + first = 0; + } + atomic_dec(&skb->users); + dev_kfree_skb_irq(skb); + } + spin_lock(&ch->collect_lock); + clear_normalized_cda(&ch->ccw[4]); + if (ch->collect_len > 0) { + int rc; + + if (ctcm_checkalloc_buffer(ch)) { + spin_unlock(&ch->collect_lock); + return; + } + ch->trans_skb->data = ch->trans_skb_data; + skb_reset_tail_pointer(ch->trans_skb); + ch->trans_skb->len = 0; + if (ch->prof.maxmulti < (ch->collect_len + 2)) + ch->prof.maxmulti = ch->collect_len + 2; + if (ch->prof.maxcqueue < skb_queue_len(&ch->collect_queue)) + ch->prof.maxcqueue = skb_queue_len(&ch->collect_queue); + *((__u16 *)skb_put(ch->trans_skb, 2)) = ch->collect_len + 2; + i = 0; + while ((skb = skb_dequeue(&ch->collect_queue))) { + skb_copy_from_linear_data(skb, + skb_put(ch->trans_skb, skb->len), skb->len); + priv->stats.tx_packets++; + priv->stats.tx_bytes += skb->len - LL_HEADER_LENGTH; + atomic_dec(&skb->users); + dev_kfree_skb_irq(skb); + i++; + } + ch->collect_len = 0; + spin_unlock(&ch->collect_lock); + ch->ccw[1].count = ch->trans_skb->len; + fsm_addtimer(&ch->timer, CTCM_TIME_5_SEC, CTC_EVENT_TIMER, ch); + ch->prof.send_stamp = current_kernel_time(); /* xtime */ + rc = ccw_device_start(ch->cdev, 
&ch->ccw[0], + (unsigned long)ch, 0xff, 0); + ch->prof.doios_multi++; + if (rc != 0) { + priv->stats.tx_dropped += i; + priv->stats.tx_errors += i; + fsm_deltimer(&ch->timer); + ctcm_ccw_check_rc(ch, rc, "chained TX"); + } + } else { + spin_unlock(&ch->collect_lock); + fsm_newstate(fi, CTC_STATE_TXIDLE); + } + ctcm_clear_busy_do(dev); +} + +/** + * Initial data is sent. + * Notify device statemachine that we are up and + * running. + * + * fi An instance of a channel statemachine. + * event The event, just happened. + * arg Generic pointer, casted from channel * upon call. + */ +void ctcm_chx_txidle(fsm_instance *fi, int event, void *arg) +{ + struct channel *ch = arg; + struct net_device *dev = ch->netdev; + struct ctcm_priv *priv = dev->priv; + + CTCM_DBF_TEXT(TRACE, 6, __FUNCTION__); + fsm_deltimer(&ch->timer); + fsm_newstate(fi, CTC_STATE_TXIDLE); + fsm_event(priv->fsm, DEV_EVENT_TXUP, ch->netdev); +} + +/** + * Got normal data, check for sanity, queue it up, allocate new buffer + * trigger bottom half, and initiate next read. + * + * fi An instance of a channel statemachine. + * event The event, just happened. + * arg Generic pointer, casted from channel * upon call. + */ +static void chx_rx(fsm_instance *fi, int event, void *arg) +{ + struct channel *ch = arg; + struct net_device *dev = ch->netdev; + struct ctcm_priv *priv = dev->priv; + int len = ch->max_bufsize - ch->irb->scsw.count; + struct sk_buff *skb = ch->trans_skb; + __u16 block_len = *((__u16 *)skb->data); + int check_len; + int rc; + + fsm_deltimer(&ch->timer); + if (len < 8) { + ctcm_pr_debug("%s: got packet with length %d < 8\n", + dev->name, len); + priv->stats.rx_dropped++; + priv->stats.rx_length_errors++; + goto again; + } + if (len > ch->max_bufsize) { + ctcm_pr_debug("%s: got packet with length %d > %d\n", + dev->name, len, ch->max_bufsize); + priv->stats.rx_dropped++; + priv->stats.rx_length_errors++; + goto again; + } + + /* + * VM TCP seems to have a bug sending 2 trailing bytes of garbage. + */ + switch (ch->protocol) { + case CTCM_PROTO_S390: + case CTCM_PROTO_OS390: + check_len = block_len + 2; + break; + default: + check_len = block_len; + break; + } + if ((len < block_len) || (len > check_len)) { + ctcm_pr_debug("%s: got block length %d != rx length %d\n", + dev->name, block_len, len); + if (do_debug) + ctcmpc_dump_skb(skb, 0); + + *((__u16 *)skb->data) = len; + priv->stats.rx_dropped++; + priv->stats.rx_length_errors++; + goto again; + } + block_len -= 2; + if (block_len > 0) { + *((__u16 *)skb->data) = block_len; + ctcm_unpack_skb(ch, skb); + } + again: + skb->data = ch->trans_skb_data; + skb_reset_tail_pointer(skb); + skb->len = 0; + if (ctcm_checkalloc_buffer(ch)) + return; + ch->ccw[1].count = ch->max_bufsize; + rc = ccw_device_start(ch->cdev, &ch->ccw[0], + (unsigned long)ch, 0xff, 0); + if (rc != 0) + ctcm_ccw_check_rc(ch, rc, "normal RX"); +} + +/** + * Initialize connection by sending a __u16 of value 0. + * + * fi An instance of a channel statemachine. + * event The event, just happened. + * arg Generic pointer, casted from channel * upon call. + */ +static void chx_firstio(fsm_instance *fi, int event, void *arg) +{ + struct channel *ch = arg; + int rc; + + CTCM_DBF_TEXT(TRACE, 6, __FUNCTION__); + + if (fsm_getstate(fi) == CTC_STATE_TXIDLE) + ctcm_pr_debug("%s: remote side issued READ?, init.\n", ch->id); + fsm_deltimer(&ch->timer); + if (ctcm_checkalloc_buffer(ch)) + return; + if ((fsm_getstate(fi) == CTC_STATE_SETUPWAIT) && + (ch->protocol == CTCM_PROTO_OS390)) { + /* OS/390 resp. 
z/OS */ + if (CHANNEL_DIRECTION(ch->flags) == READ) { + *((__u16 *)ch->trans_skb->data) = CTCM_INITIAL_BLOCKLEN; + fsm_addtimer(&ch->timer, CTCM_TIME_5_SEC, + CTC_EVENT_TIMER, ch); + chx_rxidle(fi, event, arg); + } else { + struct net_device *dev = ch->netdev; + struct ctcm_priv *priv = dev->priv; + fsm_newstate(fi, CTC_STATE_TXIDLE); + fsm_event(priv->fsm, DEV_EVENT_TXUP, dev); + } + return; + } + + /* + * Don't setup a timer for receiving the initial RX frame + * if in compatibility mode, since VM TCP delays the initial + * frame until it has some data to send. + */ + if ((CHANNEL_DIRECTION(ch->flags) == WRITE) || + (ch->protocol != CTCM_PROTO_S390)) + fsm_addtimer(&ch->timer, CTCM_TIME_5_SEC, CTC_EVENT_TIMER, ch); + + *((__u16 *)ch->trans_skb->data) = CTCM_INITIAL_BLOCKLEN; + ch->ccw[1].count = 2; /* Transfer only length */ + + fsm_newstate(fi, (CHANNEL_DIRECTION(ch->flags) == READ) + ? CTC_STATE_RXINIT : CTC_STATE_TXINIT); + rc = ccw_device_start(ch->cdev, &ch->ccw[0], + (unsigned long)ch, 0xff, 0); + if (rc != 0) { + fsm_deltimer(&ch->timer); + fsm_newstate(fi, CTC_STATE_SETUPWAIT); + ctcm_ccw_check_rc(ch, rc, "init IO"); + } + /* + * If in compatibility mode since we don't setup a timer, we + * also signal RX channel up immediately. This enables us + * to send packets early which in turn usually triggers some + * reply from VM TCP which brings up the RX channel to it's + * final state. + */ + if ((CHANNEL_DIRECTION(ch->flags) == READ) && + (ch->protocol == CTCM_PROTO_S390)) { + struct net_device *dev = ch->netdev; + struct ctcm_priv *priv = dev->priv; + fsm_event(priv->fsm, DEV_EVENT_RXUP, dev); + } +} + +/** + * Got initial data, check it. If OK, + * notify device statemachine that we are up and + * running. + * + * fi An instance of a channel statemachine. + * event The event, just happened. + * arg Generic pointer, casted from channel * upon call. + */ +static void chx_rxidle(fsm_instance *fi, int event, void *arg) +{ + struct channel *ch = arg; + struct net_device *dev = ch->netdev; + struct ctcm_priv *priv = dev->priv; + __u16 buflen; + int rc; + + CTCM_DBF_TEXT(TRACE, 6, __FUNCTION__); + fsm_deltimer(&ch->timer); + buflen = *((__u16 *)ch->trans_skb->data); + if (do_debug) + ctcm_pr_debug("%s: Initial RX count %d\n", dev->name, buflen); + + if (buflen >= CTCM_INITIAL_BLOCKLEN) { + if (ctcm_checkalloc_buffer(ch)) + return; + ch->ccw[1].count = ch->max_bufsize; + fsm_newstate(fi, CTC_STATE_RXIDLE); + rc = ccw_device_start(ch->cdev, &ch->ccw[0], + (unsigned long)ch, 0xff, 0); + if (rc != 0) { + fsm_newstate(fi, CTC_STATE_RXINIT); + ctcm_ccw_check_rc(ch, rc, "initial RX"); + } else + fsm_event(priv->fsm, DEV_EVENT_RXUP, dev); + } else { + if (do_debug) + ctcm_pr_debug("%s: Initial RX count %d not %d\n", + dev->name, buflen, CTCM_INITIAL_BLOCKLEN); + chx_firstio(fi, event, arg); + } +} + +/** + * Set channel into extended mode. + * + * fi An instance of a channel statemachine. + * event The event, just happened. + * arg Generic pointer, casted from channel * upon call. 
+ */ +static void ctcm_chx_setmode(fsm_instance *fi, int event, void *arg) +{ + struct channel *ch = arg; + int rc; + unsigned long saveflags = 0; + int timeout = CTCM_TIME_5_SEC; + + fsm_deltimer(&ch->timer); + if (IS_MPC(ch)) { + timeout = 1500; + if (do_debug) + ctcm_pr_debug("ctcm enter: %s(): cp=%i ch=0x%p id=%s\n", + __FUNCTION__, smp_processor_id(), ch, ch->id); + } + fsm_addtimer(&ch->timer, timeout, CTC_EVENT_TIMER, ch); + fsm_newstate(fi, CTC_STATE_SETUPWAIT); + if (do_debug_ccw && IS_MPC(ch)) + ctcmpc_dumpit((char *)&ch->ccw[6], sizeof(struct ccw1) * 2); + + if (event == CTC_EVENT_TIMER) /* only for timer not yet locked */ + spin_lock_irqsave(get_ccwdev_lock(ch->cdev), saveflags); + /* Such conditional locking is undeterministic in + * static view. => ignore sparse warnings here. */ + + rc = ccw_device_start(ch->cdev, &ch->ccw[6], + (unsigned long)ch, 0xff, 0); + if (event == CTC_EVENT_TIMER) /* see above comments */ + spin_unlock_irqrestore(get_ccwdev_lock(ch->cdev), saveflags); + if (rc != 0) { + fsm_deltimer(&ch->timer); + fsm_newstate(fi, CTC_STATE_STARTWAIT); + ctcm_ccw_check_rc(ch, rc, "set Mode"); + } else + ch->retry = 0; +} + +/** + * Setup channel. + * + * fi An instance of a channel statemachine. + * event The event, just happened. + * arg Generic pointer, casted from channel * upon call. + */ +static void ctcm_chx_start(fsm_instance *fi, int event, void *arg) +{ + struct channel *ch = arg; + int rc; + struct net_device *dev; + unsigned long saveflags; + + CTCM_DBF_TEXT(TRACE, 5, __FUNCTION__); + if (ch == NULL) { + ctcm_pr_warn("chx_start ch=NULL\n"); + return; + } + if (ch->netdev == NULL) { + ctcm_pr_warn("chx_start dev=NULL, id=%s\n", ch->id); + return; + } + dev = ch->netdev; + + if (do_debug) + ctcm_pr_debug("%s: %s channel start\n", dev->name, + (CHANNEL_DIRECTION(ch->flags) == READ) ? "RX" : "TX"); + + if (ch->trans_skb != NULL) { + clear_normalized_cda(&ch->ccw[1]); + dev_kfree_skb(ch->trans_skb); + ch->trans_skb = NULL; + } + if (CHANNEL_DIRECTION(ch->flags) == READ) { + ch->ccw[1].cmd_code = CCW_CMD_READ; + ch->ccw[1].flags = CCW_FLAG_SLI; + ch->ccw[1].count = 0; + } else { + ch->ccw[1].cmd_code = CCW_CMD_WRITE; + ch->ccw[1].flags = CCW_FLAG_SLI | CCW_FLAG_CC; + ch->ccw[1].count = 0; + } + if (ctcm_checkalloc_buffer(ch)) { + ctcm_pr_notice("%s: %s trans_skb allocation delayed " + "until first transfer\n", dev->name, + (CHANNEL_DIRECTION(ch->flags) == READ) ? "RX" : "TX"); + } + + ch->ccw[0].cmd_code = CCW_CMD_PREPARE; + ch->ccw[0].flags = CCW_FLAG_SLI | CCW_FLAG_CC; + ch->ccw[0].count = 0; + ch->ccw[0].cda = 0; + ch->ccw[2].cmd_code = CCW_CMD_NOOP; /* jointed CE + DE */ + ch->ccw[2].flags = CCW_FLAG_SLI; + ch->ccw[2].count = 0; + ch->ccw[2].cda = 0; + memcpy(&ch->ccw[3], &ch->ccw[0], sizeof(struct ccw1) * 3); + ch->ccw[4].cda = 0; + ch->ccw[4].flags &= ~CCW_FLAG_IDA; + + fsm_newstate(fi, CTC_STATE_STARTWAIT); + fsm_addtimer(&ch->timer, 1000, CTC_EVENT_TIMER, ch); + spin_lock_irqsave(get_ccwdev_lock(ch->cdev), saveflags); + rc = ccw_device_halt(ch->cdev, (unsigned long)ch); + spin_unlock_irqrestore(get_ccwdev_lock(ch->cdev), saveflags); + if (rc != 0) { + if (rc != -EBUSY) + fsm_deltimer(&ch->timer); + ctcm_ccw_check_rc(ch, rc, "initial HaltIO"); + } +} + +/** + * Shutdown a channel. + * + * fi An instance of a channel statemachine. + * event The event, just happened. + * arg Generic pointer, casted from channel * upon call. 
+ */ +static void ctcm_chx_haltio(fsm_instance *fi, int event, void *arg) +{ + struct channel *ch = arg; + unsigned long saveflags = 0; + int rc; + int oldstate; + + CTCM_DBF_TEXT(TRACE, 2, __FUNCTION__); + fsm_deltimer(&ch->timer); + if (IS_MPC(ch)) + fsm_deltimer(&ch->sweep_timer); + + fsm_addtimer(&ch->timer, CTCM_TIME_5_SEC, CTC_EVENT_TIMER, ch); + + if (event == CTC_EVENT_STOP) /* only for STOP not yet locked */ + spin_lock_irqsave(get_ccwdev_lock(ch->cdev), saveflags); + /* Such conditional locking is undeterministic in + * static view. => ignore sparse warnings here. */ + oldstate = fsm_getstate(fi); + fsm_newstate(fi, CTC_STATE_TERM); + rc = ccw_device_halt(ch->cdev, (unsigned long)ch); + + if (event == CTC_EVENT_STOP) + spin_unlock_irqrestore(get_ccwdev_lock(ch->cdev), saveflags); + /* see remark above about conditional locking */ + + if (rc != 0 && rc != -EBUSY) { + fsm_deltimer(&ch->timer); + if (event != CTC_EVENT_STOP) { + fsm_newstate(fi, oldstate); + ctcm_ccw_check_rc(ch, rc, (char *)__FUNCTION__); + } + } +} + +/** + * Cleanup helper for chx_fail and chx_stopped + * cleanup channels queue and notify interface statemachine. + * + * fi An instance of a channel statemachine. + * state The next state (depending on caller). + * ch The channel to operate on. + */ +static void ctcm_chx_cleanup(fsm_instance *fi, int state, + struct channel *ch) +{ + struct net_device *dev = ch->netdev; + struct ctcm_priv *priv = dev->priv; + + CTCM_DBF_TEXT(TRACE, 3, __FUNCTION__); + + fsm_deltimer(&ch->timer); + if (IS_MPC(ch)) + fsm_deltimer(&ch->sweep_timer); + + fsm_newstate(fi, state); + if (state == CTC_STATE_STOPPED && ch->trans_skb != NULL) { + clear_normalized_cda(&ch->ccw[1]); + dev_kfree_skb_any(ch->trans_skb); + ch->trans_skb = NULL; + } + + ch->th_seg = 0x00; + ch->th_seq_num = 0x00; + if (CHANNEL_DIRECTION(ch->flags) == READ) { + skb_queue_purge(&ch->io_queue); + fsm_event(priv->fsm, DEV_EVENT_RXDOWN, dev); + } else { + ctcm_purge_skb_queue(&ch->io_queue); + if (IS_MPC(ch)) + ctcm_purge_skb_queue(&ch->sweep_queue); + spin_lock(&ch->collect_lock); + ctcm_purge_skb_queue(&ch->collect_queue); + ch->collect_len = 0; + spin_unlock(&ch->collect_lock); + fsm_event(priv->fsm, DEV_EVENT_TXDOWN, dev); + } +} + +/** + * A channel has successfully been halted. + * Cleanup it's queue and notify interface statemachine. + * + * fi An instance of a channel statemachine. + * event The event, just happened. + * arg Generic pointer, casted from channel * upon call. + */ +static void ctcm_chx_stopped(fsm_instance *fi, int event, void *arg) +{ + CTCM_DBF_TEXT(TRACE, 3, __FUNCTION__); + ctcm_chx_cleanup(fi, CTC_STATE_STOPPED, arg); +} + +/** + * A stop command from device statemachine arrived and we are in + * not operational mode. Set state to stopped. + * + * fi An instance of a channel statemachine. + * event The event, just happened. + * arg Generic pointer, casted from channel * upon call. + */ +static void ctcm_chx_stop(fsm_instance *fi, int event, void *arg) +{ + fsm_newstate(fi, CTC_STATE_STOPPED); +} + +/** + * A machine check for no path, not operational status or gone device has + * happened. + * Cleanup queue and notify interface statemachine. + * + * fi An instance of a channel statemachine. + * event The event, just happened. + * arg Generic pointer, casted from channel * upon call. + */ +static void ctcm_chx_fail(fsm_instance *fi, int event, void *arg) +{ + CTCM_DBF_TEXT(TRACE, 3, __FUNCTION__); + ctcm_chx_cleanup(fi, CTC_STATE_NOTOP, arg); +} + +/** + * Handle error during setup of channel. 
+ * + * fi An instance of a channel statemachine. + * event The event, just happened. + * arg Generic pointer, casted from channel * upon call. + */ +static void ctcm_chx_setuperr(fsm_instance *fi, int event, void *arg) +{ + struct channel *ch = arg; + struct net_device *dev = ch->netdev; + struct ctcm_priv *priv = dev->priv; + + /* + * Special case: Got UC_RCRESET on setmode. + * This means that remote side isn't setup. In this case + * simply retry after some 10 secs... + */ + if ((fsm_getstate(fi) == CTC_STATE_SETUPWAIT) && + ((event == CTC_EVENT_UC_RCRESET) || + (event == CTC_EVENT_UC_RSRESET))) { + fsm_newstate(fi, CTC_STATE_STARTRETRY); + fsm_deltimer(&ch->timer); + fsm_addtimer(&ch->timer, CTCM_TIME_5_SEC, CTC_EVENT_TIMER, ch); + if (!IS_MPC(ch) && (CHANNEL_DIRECTION(ch->flags) == READ)) { + int rc = ccw_device_halt(ch->cdev, (unsigned long)ch); + if (rc != 0) + ctcm_ccw_check_rc(ch, rc, + "HaltIO in chx_setuperr"); + } + return; + } + + CTCM_DBF_TEXT_(ERROR, CTC_DBF_CRIT, + "%s : %s error during %s channel setup state=%s\n", + dev->name, ctc_ch_event_names[event], + (CHANNEL_DIRECTION(ch->flags) == READ) ? "RX" : "TX", + fsm_getstate_str(fi)); + + if (CHANNEL_DIRECTION(ch->flags) == READ) { + fsm_newstate(fi, CTC_STATE_RXERR); + fsm_event(priv->fsm, DEV_EVENT_RXDOWN, dev); + } else { + fsm_newstate(fi, CTC_STATE_TXERR); + fsm_event(priv->fsm, DEV_EVENT_TXDOWN, dev); + } +} + +/** + * Restart a channel after an error. + * + * fi An instance of a channel statemachine. + * event The event, just happened. + * arg Generic pointer, casted from channel * upon call. + */ +static void ctcm_chx_restart(fsm_instance *fi, int event, void *arg) +{ + struct channel *ch = arg; + struct net_device *dev = ch->netdev; + unsigned long saveflags = 0; + int oldstate; + int rc; + + CTCM_DBF_TEXT(TRACE, CTC_DBF_NOTICE, __FUNCTION__); + fsm_deltimer(&ch->timer); + ctcm_pr_debug("%s: %s channel restart\n", dev->name, + (CHANNEL_DIRECTION(ch->flags) == READ) ? "RX" : "TX"); + fsm_addtimer(&ch->timer, CTCM_TIME_5_SEC, CTC_EVENT_TIMER, ch); + oldstate = fsm_getstate(fi); + fsm_newstate(fi, CTC_STATE_STARTWAIT); + if (event == CTC_EVENT_TIMER) /* only for timer not yet locked */ + spin_lock_irqsave(get_ccwdev_lock(ch->cdev), saveflags); + /* Such conditional locking is a known problem for + * sparse because its undeterministic in static view. + * Warnings should be ignored here. */ + rc = ccw_device_halt(ch->cdev, (unsigned long)ch); + if (event == CTC_EVENT_TIMER) + spin_unlock_irqrestore(get_ccwdev_lock(ch->cdev), saveflags); + if (rc != 0) { + if (rc != -EBUSY) { + fsm_deltimer(&ch->timer); + fsm_newstate(fi, oldstate); + } + ctcm_ccw_check_rc(ch, rc, "HaltIO in ctcm_chx_restart"); + } +} + +/** + * Handle error during RX initial handshake (exchange of + * 0-length block header) + * + * fi An instance of a channel statemachine. + * event The event, just happened. + * arg Generic pointer, casted from channel * upon call. 
+ */ +static void ctcm_chx_rxiniterr(fsm_instance *fi, int event, void *arg) +{ + struct channel *ch = arg; + struct net_device *dev = ch->netdev; + struct ctcm_priv *priv = dev->priv; + + CTCM_DBF_TEXT(SETUP, 3, __FUNCTION__); + if (event == CTC_EVENT_TIMER) { + if (!IS_MPCDEV(dev)) + /* TODO : check if MPC deletes timer somewhere */ + fsm_deltimer(&ch->timer); + ctcm_pr_debug("%s: Timeout during RX init handshake\n", + dev->name); + if (ch->retry++ < 3) + ctcm_chx_restart(fi, event, arg); + else { + fsm_newstate(fi, CTC_STATE_RXERR); + fsm_event(priv->fsm, DEV_EVENT_RXDOWN, dev); + } + } else + ctcm_pr_warn("%s: Error during RX init handshake\n", dev->name); +} + +/** + * Notify device statemachine if we gave up initialization + * of RX channel. + * + * fi An instance of a channel statemachine. + * event The event, just happened. + * arg Generic pointer, casted from channel * upon call. + */ +static void ctcm_chx_rxinitfail(fsm_instance *fi, int event, void *arg) +{ + struct channel *ch = arg; + struct net_device *dev = ch->netdev; + struct ctcm_priv *priv = dev->priv; + + CTCM_DBF_TEXT(SETUP, 3, __FUNCTION__); + fsm_newstate(fi, CTC_STATE_RXERR); + ctcm_pr_warn("%s: RX busy. Initialization failed\n", dev->name); + fsm_event(priv->fsm, DEV_EVENT_RXDOWN, dev); +} + +/** + * Handle RX Unit check remote reset (remote disconnected) + * + * fi An instance of a channel statemachine. + * event The event, just happened. + * arg Generic pointer, casted from channel * upon call. + */ +static void ctcm_chx_rxdisc(fsm_instance *fi, int event, void *arg) +{ + struct channel *ch = arg; + struct channel *ch2; + struct net_device *dev = ch->netdev; + struct ctcm_priv *priv = dev->priv; + + CTCM_DBF_DEV_NAME(TRACE, dev, "Got remote disconnect, re-initializing"); + fsm_deltimer(&ch->timer); + if (do_debug) + ctcm_pr_debug("%s: Got remote disconnect, " + "re-initializing ...\n", dev->name); + /* + * Notify device statemachine + */ + fsm_event(priv->fsm, DEV_EVENT_RXDOWN, dev); + fsm_event(priv->fsm, DEV_EVENT_TXDOWN, dev); + + fsm_newstate(fi, CTC_STATE_DTERM); + ch2 = priv->channel[WRITE]; + fsm_newstate(ch2->fsm, CTC_STATE_DTERM); + + ccw_device_halt(ch->cdev, (unsigned long)ch); + ccw_device_halt(ch2->cdev, (unsigned long)ch2); +} + +/** + * Handle error during TX channel initialization. + * + * fi An instance of a channel statemachine. + * event The event, just happened. + * arg Generic pointer, casted from channel * upon call. + */ +static void ctcm_chx_txiniterr(fsm_instance *fi, int event, void *arg) +{ + struct channel *ch = arg; + struct net_device *dev = ch->netdev; + struct ctcm_priv *priv = dev->priv; + + if (event == CTC_EVENT_TIMER) { + fsm_deltimer(&ch->timer); + CTCM_DBF_DEV_NAME(ERROR, dev, + "Timeout during TX init handshake"); + if (ch->retry++ < 3) + ctcm_chx_restart(fi, event, arg); + else { + fsm_newstate(fi, CTC_STATE_TXERR); + fsm_event(priv->fsm, DEV_EVENT_TXDOWN, dev); + } + } else { + CTCM_DBF_TEXT_(ERROR, CTC_DBF_ERROR, + "%s : %s error during channel setup state=%s", + dev->name, ctc_ch_event_names[event], + fsm_getstate_str(fi)); + + ctcm_pr_warn("%s: Error during TX init handshake\n", dev->name); + } +} + +/** + * Handle TX timeout by retrying operation. + * + * fi An instance of a channel statemachine. + * event The event, just happened. + * arg Generic pointer, casted from channel * upon call. 
+ */ +static void ctcm_chx_txretry(fsm_instance *fi, int event, void *arg) +{ + struct channel *ch = arg; + struct net_device *dev = ch->netdev; + struct ctcm_priv *priv = dev->priv; + struct sk_buff *skb; + + if (do_debug) + ctcm_pr_debug("ctcmpc enter: %s(): cp=%i ch=0x%p id=%s\n", + __FUNCTION__, smp_processor_id(), ch, ch->id); + + fsm_deltimer(&ch->timer); + if (ch->retry++ > 3) { + struct mpc_group *gptr = priv->mpcg; + ctcm_pr_debug("%s: TX retry failed, restarting channel\n", + dev->name); + fsm_event(priv->fsm, DEV_EVENT_TXDOWN, dev); + /* call restart if not MPC or if MPC and mpcg fsm is ready. + use gptr as mpc indicator */ + if (!(gptr && (fsm_getstate(gptr->fsm) != MPCG_STATE_READY))) + ctcm_chx_restart(fi, event, arg); + goto done; + } + + ctcm_pr_debug("%s: TX retry %d\n", dev->name, ch->retry); + skb = skb_peek(&ch->io_queue); + if (skb) { + int rc = 0; + unsigned long saveflags = 0; + clear_normalized_cda(&ch->ccw[4]); + ch->ccw[4].count = skb->len; + if (set_normalized_cda(&ch->ccw[4], skb->data)) { + ctcm_pr_debug("%s: IDAL alloc failed, chan restart\n", + dev->name); + fsm_event(priv->fsm, DEV_EVENT_TXDOWN, dev); + ctcm_chx_restart(fi, event, arg); + goto done; + } + fsm_addtimer(&ch->timer, 1000, CTC_EVENT_TIMER, ch); + if (event == CTC_EVENT_TIMER) /* for TIMER not yet locked */ + spin_lock_irqsave(get_ccwdev_lock(ch->cdev), saveflags); + /* Such conditional locking is a known problem for + * sparse because its undeterministic in static view. + * Warnings should be ignored here. */ + if (do_debug_ccw) + ctcmpc_dumpit((char *)&ch->ccw[3], + sizeof(struct ccw1) * 3); + + rc = ccw_device_start(ch->cdev, &ch->ccw[3], + (unsigned long)ch, 0xff, 0); + if (event == CTC_EVENT_TIMER) + spin_unlock_irqrestore(get_ccwdev_lock(ch->cdev), + saveflags); + if (rc != 0) { + fsm_deltimer(&ch->timer); + ctcm_ccw_check_rc(ch, rc, "TX in chx_txretry"); + ctcm_purge_skb_queue(&ch->io_queue); + } + } +done: + return; +} + +/** + * Handle fatal errors during an I/O command. + * + * fi An instance of a channel statemachine. + * event The event, just happened. + * arg Generic pointer, casted from channel * upon call. + */ +static void ctcm_chx_iofatal(fsm_instance *fi, int event, void *arg) +{ + struct channel *ch = arg; + struct net_device *dev = ch->netdev; + struct ctcm_priv *priv = dev->priv; + + CTCM_DBF_TEXT(TRACE, 3, __FUNCTION__); + fsm_deltimer(&ch->timer); + ctcm_pr_warn("%s %s : unrecoverable channel error\n", + CTC_DRIVER_NAME, dev->name); + if (IS_MPC(ch)) { + priv->stats.tx_dropped++; + priv->stats.tx_errors++; + } + + if (CHANNEL_DIRECTION(ch->flags) == READ) { + ctcm_pr_debug("%s: RX I/O error\n", dev->name); + fsm_newstate(fi, CTC_STATE_RXERR); + fsm_event(priv->fsm, DEV_EVENT_RXDOWN, dev); + } else { + ctcm_pr_debug("%s: TX I/O error\n", dev->name); + fsm_newstate(fi, CTC_STATE_TXERR); + fsm_event(priv->fsm, DEV_EVENT_TXDOWN, dev); + } +} + +/* + * The ctcm statemachine for a channel. 
+ */ +const fsm_node ch_fsm[] = { + { CTC_STATE_STOPPED, CTC_EVENT_STOP, ctcm_action_nop }, + { CTC_STATE_STOPPED, CTC_EVENT_START, ctcm_chx_start }, + { CTC_STATE_STOPPED, CTC_EVENT_FINSTAT, ctcm_action_nop }, + { CTC_STATE_STOPPED, CTC_EVENT_MC_FAIL, ctcm_action_nop }, + + { CTC_STATE_NOTOP, CTC_EVENT_STOP, ctcm_chx_stop }, + { CTC_STATE_NOTOP, CTC_EVENT_START, ctcm_action_nop }, + { CTC_STATE_NOTOP, CTC_EVENT_FINSTAT, ctcm_action_nop }, + { CTC_STATE_NOTOP, CTC_EVENT_MC_FAIL, ctcm_action_nop }, + { CTC_STATE_NOTOP, CTC_EVENT_MC_GOOD, ctcm_chx_start }, + + { CTC_STATE_STARTWAIT, CTC_EVENT_STOP, ctcm_chx_haltio }, + { CTC_STATE_STARTWAIT, CTC_EVENT_START, ctcm_action_nop }, + { CTC_STATE_STARTWAIT, CTC_EVENT_FINSTAT, ctcm_chx_setmode }, + { CTC_STATE_STARTWAIT, CTC_EVENT_TIMER, ctcm_chx_setuperr }, + { CTC_STATE_STARTWAIT, CTC_EVENT_IO_ENODEV, ctcm_chx_iofatal }, + { CTC_STATE_STARTWAIT, CTC_EVENT_MC_FAIL, ctcm_chx_fail }, + + { CTC_STATE_STARTRETRY, CTC_EVENT_STOP, ctcm_chx_haltio }, + { CTC_STATE_STARTRETRY, CTC_EVENT_TIMER, ctcm_chx_setmode }, + { CTC_STATE_STARTRETRY, CTC_EVENT_FINSTAT, ctcm_action_nop }, + { CTC_STATE_STARTRETRY, CTC_EVENT_MC_FAIL, ctcm_chx_fail }, + + { CTC_STATE_SETUPWAIT, CTC_EVENT_STOP, ctcm_chx_haltio }, + { CTC_STATE_SETUPWAIT, CTC_EVENT_START, ctcm_action_nop }, + { CTC_STATE_SETUPWAIT, CTC_EVENT_FINSTAT, chx_firstio }, + { CTC_STATE_SETUPWAIT, CTC_EVENT_UC_RCRESET, ctcm_chx_setuperr }, + { CTC_STATE_SETUPWAIT, CTC_EVENT_UC_RSRESET, ctcm_chx_setuperr }, + { CTC_STATE_SETUPWAIT, CTC_EVENT_TIMER, ctcm_chx_setmode }, + { CTC_STATE_SETUPWAIT, CTC_EVENT_IO_ENODEV, ctcm_chx_iofatal }, + { CTC_STATE_SETUPWAIT, CTC_EVENT_MC_FAIL, ctcm_chx_fail }, + + { CTC_STATE_RXINIT, CTC_EVENT_STOP, ctcm_chx_haltio }, + { CTC_STATE_RXINIT, CTC_EVENT_START, ctcm_action_nop }, + { CTC_STATE_RXINIT, CTC_EVENT_FINSTAT, chx_rxidle }, + { CTC_STATE_RXINIT, CTC_EVENT_UC_RCRESET, ctcm_chx_rxiniterr }, + { CTC_STATE_RXINIT, CTC_EVENT_UC_RSRESET, ctcm_chx_rxiniterr }, + { CTC_STATE_RXINIT, CTC_EVENT_TIMER, ctcm_chx_rxiniterr }, + { CTC_STATE_RXINIT, CTC_EVENT_ATTNBUSY, ctcm_chx_rxinitfail }, + { CTC_STATE_RXINIT, CTC_EVENT_IO_ENODEV, ctcm_chx_iofatal }, + { CTC_STATE_RXINIT, CTC_EVENT_UC_ZERO, chx_firstio }, + { CTC_STATE_RXINIT, CTC_EVENT_MC_FAIL, ctcm_chx_fail }, + + { CTC_STATE_RXIDLE, CTC_EVENT_STOP, ctcm_chx_haltio }, + { CTC_STATE_RXIDLE, CTC_EVENT_START, ctcm_action_nop }, + { CTC_STATE_RXIDLE, CTC_EVENT_FINSTAT, chx_rx }, + { CTC_STATE_RXIDLE, CTC_EVENT_UC_RCRESET, ctcm_chx_rxdisc }, + { CTC_STATE_RXIDLE, CTC_EVENT_IO_ENODEV, ctcm_chx_iofatal }, + { CTC_STATE_RXIDLE, CTC_EVENT_MC_FAIL, ctcm_chx_fail }, + { CTC_STATE_RXIDLE, CTC_EVENT_UC_ZERO, chx_rx }, + + { CTC_STATE_TXINIT, CTC_EVENT_STOP, ctcm_chx_haltio }, + { CTC_STATE_TXINIT, CTC_EVENT_START, ctcm_action_nop }, + { CTC_STATE_TXINIT, CTC_EVENT_FINSTAT, ctcm_chx_txidle }, + { CTC_STATE_TXINIT, CTC_EVENT_UC_RCRESET, ctcm_chx_txiniterr }, + { CTC_STATE_TXINIT, CTC_EVENT_UC_RSRESET, ctcm_chx_txiniterr }, + { CTC_STATE_TXINIT, CTC_EVENT_TIMER, ctcm_chx_txiniterr }, + { CTC_STATE_TXINIT, CTC_EVENT_IO_ENODEV, ctcm_chx_iofatal }, + { CTC_STATE_TXINIT, CTC_EVENT_MC_FAIL, ctcm_chx_fail }, + + { CTC_STATE_TXIDLE, CTC_EVENT_STOP, ctcm_chx_haltio }, + { CTC_STATE_TXIDLE, CTC_EVENT_START, ctcm_action_nop }, + { CTC_STATE_TXIDLE, CTC_EVENT_FINSTAT, chx_firstio }, + { CTC_STATE_TXIDLE, CTC_EVENT_UC_RCRESET, ctcm_action_nop }, + { CTC_STATE_TXIDLE, CTC_EVENT_UC_RSRESET, ctcm_action_nop }, + { CTC_STATE_TXIDLE, CTC_EVENT_IO_ENODEV, 
ctcm_chx_iofatal }, + { CTC_STATE_TXIDLE, CTC_EVENT_MC_FAIL, ctcm_chx_fail }, + + { CTC_STATE_TERM, CTC_EVENT_STOP, ctcm_action_nop }, + { CTC_STATE_TERM, CTC_EVENT_START, ctcm_chx_restart }, + { CTC_STATE_TERM, CTC_EVENT_FINSTAT, ctcm_chx_stopped }, + { CTC_STATE_TERM, CTC_EVENT_UC_RCRESET, ctcm_action_nop }, + { CTC_STATE_TERM, CTC_EVENT_UC_RSRESET, ctcm_action_nop }, + { CTC_STATE_TERM, CTC_EVENT_MC_FAIL, ctcm_chx_fail }, + + { CTC_STATE_DTERM, CTC_EVENT_STOP, ctcm_chx_haltio }, + { CTC_STATE_DTERM, CTC_EVENT_START, ctcm_chx_restart }, + { CTC_STATE_DTERM, CTC_EVENT_FINSTAT, ctcm_chx_setmode }, + { CTC_STATE_DTERM, CTC_EVENT_UC_RCRESET, ctcm_action_nop }, + { CTC_STATE_DTERM, CTC_EVENT_UC_RSRESET, ctcm_action_nop }, + { CTC_STATE_DTERM, CTC_EVENT_MC_FAIL, ctcm_chx_fail }, + + { CTC_STATE_TX, CTC_EVENT_STOP, ctcm_chx_haltio }, + { CTC_STATE_TX, CTC_EVENT_START, ctcm_action_nop }, + { CTC_STATE_TX, CTC_EVENT_FINSTAT, chx_txdone }, + { CTC_STATE_TX, CTC_EVENT_UC_RCRESET, ctcm_chx_txretry }, + { CTC_STATE_TX, CTC_EVENT_UC_RSRESET, ctcm_chx_txretry }, + { CTC_STATE_TX, CTC_EVENT_TIMER, ctcm_chx_txretry }, + { CTC_STATE_TX, CTC_EVENT_IO_ENODEV, ctcm_chx_iofatal }, + { CTC_STATE_TX, CTC_EVENT_MC_FAIL, ctcm_chx_fail }, + + { CTC_STATE_RXERR, CTC_EVENT_STOP, ctcm_chx_haltio }, + { CTC_STATE_TXERR, CTC_EVENT_STOP, ctcm_chx_haltio }, + { CTC_STATE_TXERR, CTC_EVENT_MC_FAIL, ctcm_chx_fail }, + { CTC_STATE_RXERR, CTC_EVENT_MC_FAIL, ctcm_chx_fail }, +}; + +int ch_fsm_len = ARRAY_SIZE(ch_fsm); + +/* + * MPC actions for mpc channel statemachine + * handling of MPC protocol requires extra + * statemachine and actions which are prefixed ctcmpc_ . + * The ctc_ch_states and ctc_ch_state_names, + * ctc_ch_events and ctc_ch_event_names share the ctcm definitions + * which are expanded by some elements. + */ + +/* + * Actions for mpc channel statemachine. + */ + +/** + * Normal data has been send. Free the corresponding + * skb (it's in io_queue), reset dev->tbusy and + * revert to idle state. + * + * fi An instance of a channel statemachine. + * event The event, just happened. + * arg Generic pointer, casted from channel * upon call. 
+ */ +static void ctcmpc_chx_txdone(fsm_instance *fi, int event, void *arg) +{ + struct channel *ch = arg; + struct net_device *dev = ch->netdev; + struct ctcm_priv *priv = dev->priv; + struct mpc_group *grp = priv->mpcg; + struct sk_buff *skb; + int first = 1; + int i; + struct timespec done_stamp; + __u32 data_space; + unsigned long duration; + struct sk_buff *peekskb; + int rc; + struct th_header *header; + struct pdu *p_header; + + if (do_debug) + ctcm_pr_debug("%s cp:%i enter: %s()\n", + dev->name, smp_processor_id(), __FUNCTION__); + + done_stamp = current_kernel_time(); /* xtime */ + duration = (done_stamp.tv_sec - ch->prof.send_stamp.tv_sec) * 1000000 + + (done_stamp.tv_nsec - ch->prof.send_stamp.tv_nsec) / 1000; + if (duration > ch->prof.tx_time) + ch->prof.tx_time = duration; + + if (ch->irb->scsw.count != 0) + ctcm_pr_debug("%s: TX not complete, remaining %d bytes\n", + dev->name, ch->irb->scsw.count); + fsm_deltimer(&ch->timer); + while ((skb = skb_dequeue(&ch->io_queue))) { + priv->stats.tx_packets++; + priv->stats.tx_bytes += skb->len - TH_HEADER_LENGTH; + if (first) { + priv->stats.tx_bytes += 2; + first = 0; + } + atomic_dec(&skb->users); + dev_kfree_skb_irq(skb); + } + spin_lock(&ch->collect_lock); + clear_normalized_cda(&ch->ccw[4]); + + if ((ch->collect_len <= 0) || (grp->in_sweep != 0)) { + spin_unlock(&ch->collect_lock); + fsm_newstate(fi, CTC_STATE_TXIDLE); + goto done; + } + + if (ctcm_checkalloc_buffer(ch)) { + spin_unlock(&ch->collect_lock); + goto done; + } + ch->trans_skb->data = ch->trans_skb_data; + skb_reset_tail_pointer(ch->trans_skb); + ch->trans_skb->len = 0; + if (ch->prof.maxmulti < (ch->collect_len + TH_HEADER_LENGTH)) + ch->prof.maxmulti = ch->collect_len + TH_HEADER_LENGTH; + if (ch->prof.maxcqueue < skb_queue_len(&ch->collect_queue)) + ch->prof.maxcqueue = skb_queue_len(&ch->collect_queue); + i = 0; + + if (do_debug_data) + ctcm_pr_debug("ctcmpc: %s() building " + "trans_skb from collect_q \n", __FUNCTION__); + + data_space = grp->group_max_buflen - TH_HEADER_LENGTH; + + if (do_debug_data) + ctcm_pr_debug("ctcmpc: %s() building trans_skb from collect_q" + " data_space:%04x\n", __FUNCTION__, data_space); + p_header = NULL; + while ((skb = skb_dequeue(&ch->collect_queue))) { + memcpy(skb_put(ch->trans_skb, skb->len), skb->data, skb->len); + p_header = (struct pdu *) + (skb_tail_pointer(ch->trans_skb) - skb->len); + p_header->pdu_flag = 0x00; + if (skb->protocol == ntohs(ETH_P_SNAP)) + p_header->pdu_flag |= 0x60; + else + p_header->pdu_flag |= 0x20; + + if (do_debug_data) { + ctcm_pr_debug("ctcmpc: %s()trans_skb len:%04x \n", + __FUNCTION__, ch->trans_skb->len); + ctcm_pr_debug("ctcmpc: %s() pdu header and data" + " for up to 32 bytes sent to vtam\n", + __FUNCTION__); + ctcmpc_dumpit((char *)p_header, + min_t(int, skb->len, 32)); + } + ch->collect_len -= skb->len; + data_space -= skb->len; + priv->stats.tx_packets++; + priv->stats.tx_bytes += skb->len; + atomic_dec(&skb->users); + dev_kfree_skb_any(skb); + peekskb = skb_peek(&ch->collect_queue); + if (peekskb->len > data_space) + break; + i++; + } + /* p_header points to the last one we handled */ + if (p_header) + p_header->pdu_flag |= PDU_LAST; /*Say it's the last one*/ + header = kzalloc(TH_HEADER_LENGTH, gfp_type()); + + if (!header) { + printk(KERN_WARNING "ctcmpc: OUT OF MEMORY IN %s()" + ": Data Lost \n", __FUNCTION__); + spin_unlock(&ch->collect_lock); + fsm_event(priv->mpcg->fsm, MPCG_EVENT_INOP, dev); + goto done; + } + + header->th_ch_flag = TH_HAS_PDU; /* Normal data */ + ch->th_seq_num++; + 
header->th_seq_num = ch->th_seq_num; + + if (do_debug_data) + ctcm_pr_debug("%s: ToVTAM_th_seq= %08x\n" , + __FUNCTION__, ch->th_seq_num); + + memcpy(skb_push(ch->trans_skb, TH_HEADER_LENGTH), header, + TH_HEADER_LENGTH); /* put the TH on the packet */ + + kfree(header); + + if (do_debug_data) { + ctcm_pr_debug("ctcmpc: %s()trans_skb len:%04x \n", + __FUNCTION__, ch->trans_skb->len); + + ctcm_pr_debug("ctcmpc: %s() up-to-50 bytes of trans_skb " + "data to vtam from collect_q\n", __FUNCTION__); + ctcmpc_dumpit((char *)ch->trans_skb->data, + min_t(int, ch->trans_skb->len, 50)); + } + + spin_unlock(&ch->collect_lock); + clear_normalized_cda(&ch->ccw[1]); + if (set_normalized_cda(&ch->ccw[1], ch->trans_skb->data)) { + dev_kfree_skb_any(ch->trans_skb); + ch->trans_skb = NULL; + printk(KERN_WARNING + "ctcmpc: %s()CCW failure - data lost\n", + __FUNCTION__); + fsm_event(priv->mpcg->fsm, MPCG_EVENT_INOP, dev); + return; + } + ch->ccw[1].count = ch->trans_skb->len; + fsm_addtimer(&ch->timer, CTCM_TIME_5_SEC, CTC_EVENT_TIMER, ch); + ch->prof.send_stamp = current_kernel_time(); /* xtime */ + if (do_debug_ccw) + ctcmpc_dumpit((char *)&ch->ccw[0], sizeof(struct ccw1) * 3); + rc = ccw_device_start(ch->cdev, &ch->ccw[0], + (unsigned long)ch, 0xff, 0); + ch->prof.doios_multi++; + if (rc != 0) { + priv->stats.tx_dropped += i; + priv->stats.tx_errors += i; + fsm_deltimer(&ch->timer); + ctcm_ccw_check_rc(ch, rc, "chained TX"); + } +done: + ctcm_clear_busy(dev); + ctcm_pr_debug("ctcmpc exit: %s %s()\n", dev->name, __FUNCTION__); + return; +} + +/** + * Got normal data, check for sanity, queue it up, allocate new buffer + * trigger bottom half, and initiate next read. + * + * fi An instance of a channel statemachine. + * event The event, just happened. + * arg Generic pointer, casted from channel * upon call. 
+ */ +static void ctcmpc_chx_rx(fsm_instance *fi, int event, void *arg) +{ + struct channel *ch = arg; + struct net_device *dev = ch->netdev; + struct ctcm_priv *priv = dev->priv; + struct mpc_group *grp = priv->mpcg; + struct sk_buff *skb = ch->trans_skb; + struct sk_buff *new_skb; + unsigned long saveflags = 0; /* avoids compiler warning */ + int len = ch->max_bufsize - ch->irb->scsw.count; + + if (do_debug_data) { + CTCM_DBF_TEXT_(TRACE, CTC_DBF_DEBUG, "mpc_ch_rx %s cp:%i %s\n", + dev->name, smp_processor_id(), ch->id); + CTCM_DBF_TEXT_(TRACE, CTC_DBF_DEBUG, "mpc_ch_rx: maxbuf: %04x " + "len: %04x\n", ch->max_bufsize, len); + } + fsm_deltimer(&ch->timer); + + if (skb == NULL) { + ctcm_pr_debug("ctcmpc exit: %s() TRANS_SKB = NULL \n", + __FUNCTION__); + goto again; + } + + if (len < TH_HEADER_LENGTH) { + ctcm_pr_info("%s: got packet with invalid length %d\n", + dev->name, len); + priv->stats.rx_dropped++; + priv->stats.rx_length_errors++; + } else { + /* must have valid th header or game over */ + __u32 block_len = len; + len = TH_HEADER_LENGTH + XID2_LENGTH + 4; + new_skb = __dev_alloc_skb(ch->max_bufsize, GFP_ATOMIC); + + if (new_skb == NULL) { + printk(KERN_INFO "ctcmpc:%s() NEW_SKB = NULL\n", + __FUNCTION__); + printk(KERN_WARNING "ctcmpc: %s() MEMORY ALLOC FAILED" + " - DATA LOST - MPC FAILED\n", + __FUNCTION__); + fsm_event(priv->mpcg->fsm, MPCG_EVENT_INOP, dev); + goto again; + } + switch (fsm_getstate(grp->fsm)) { + case MPCG_STATE_RESET: + case MPCG_STATE_INOP: + dev_kfree_skb_any(new_skb); + break; + case MPCG_STATE_FLOWC: + case MPCG_STATE_READY: + memcpy(skb_put(new_skb, block_len), + skb->data, block_len); + skb_queue_tail(&ch->io_queue, new_skb); + tasklet_schedule(&ch->ch_tasklet); + break; + default: + memcpy(skb_put(new_skb, len), skb->data, len); + skb_queue_tail(&ch->io_queue, new_skb); + tasklet_hi_schedule(&ch->ch_tasklet); + break; + } + } + +again: + switch (fsm_getstate(grp->fsm)) { + int rc, dolock; + case MPCG_STATE_FLOWC: + case MPCG_STATE_READY: + if (ctcm_checkalloc_buffer(ch)) + break; + ch->trans_skb->data = ch->trans_skb_data; + skb_reset_tail_pointer(ch->trans_skb); + ch->trans_skb->len = 0; + ch->ccw[1].count = ch->max_bufsize; + if (do_debug_ccw) + ctcmpc_dumpit((char *)&ch->ccw[0], + sizeof(struct ccw1) * 3); + dolock = !in_irq(); + if (dolock) + spin_lock_irqsave( + get_ccwdev_lock(ch->cdev), saveflags); + rc = ccw_device_start(ch->cdev, &ch->ccw[0], + (unsigned long)ch, 0xff, 0); + if (dolock) /* see remark about conditional locking */ + spin_unlock_irqrestore( + get_ccwdev_lock(ch->cdev), saveflags); + if (rc != 0) + ctcm_ccw_check_rc(ch, rc, "normal RX"); + default: + break; + } + + if (do_debug) + ctcm_pr_debug("ctcmpc exit : %s %s(): ch=0x%p id=%s\n", + dev->name, __FUNCTION__, ch, ch->id); + +} + +/** + * Initialize connection by sending a __u16 of value 0. + * + * fi An instance of a channel statemachine. + * event The event, just happened. + * arg Generic pointer, casted from channel * upon call. 
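The receive handler just shown never hands the channel's own transfer buffer upstream: the data is copied into a freshly allocated buffer, the copy is queued for deferred (tasklet) processing, and the read is re-armed on the original buffer. A compact user-space sketch of that pattern, with hypothetical names:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct rx_buf {                 /* queued copy of one received block */
    struct rx_buf *next;
    size_t len;
    unsigned char data[];
};

static struct rx_buf *rxq_head, **rxq_tail = &rxq_head;

/* Interrupt-context half: copy the staging buffer and queue it, then re-arm. */
static int rx_complete(const unsigned char *staging, size_t len)
{
    struct rx_buf *b = malloc(sizeof(*b) + len);

    if (!b)
        return -1;              /* allocation failed: drop, keep the channel alive */
    b->next = NULL;
    b->len = len;
    memcpy(b->data, staging, len);
    *rxq_tail = b;
    rxq_tail = &b->next;
    /* here the real driver restarts the read on the same staging buffer */
    return 0;
}

/* Deferred half: consume whatever has been queued so far. */
static void rx_drain(void)
{
    struct rx_buf *b;

    while ((b = rxq_head)) {
        rxq_head = b->next;
        if (!rxq_head)
            rxq_tail = &rxq_head;
        printf("delivering %zu bytes\n", b->len);
        free(b);
    }
}

int main(void)
{
    unsigned char staging[64] = "example block";

    rx_complete(staging, 13);
    rx_complete(staging, 13);
    rx_drain();
    return 0;
}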
+ */ +static void ctcmpc_chx_firstio(fsm_instance *fi, int event, void *arg) +{ + struct channel *ch = arg; + struct net_device *dev = ch->netdev; + struct ctcm_priv *priv = dev->priv; + + if (do_debug) { + struct mpc_group *gptr = priv->mpcg; + ctcm_pr_debug("ctcmpc enter: %s(): ch=0x%p id=%s\n", + __FUNCTION__, ch, ch->id); + ctcm_pr_debug("%s() %s chstate:%i grpstate:%i chprotocol:%i\n", + __FUNCTION__, ch->id, fsm_getstate(fi), + fsm_getstate(gptr->fsm), ch->protocol); + } + if (fsm_getstate(fi) == CTC_STATE_TXIDLE) + MPC_DBF_DEV_NAME(TRACE, dev, "remote side issued READ? "); + + fsm_deltimer(&ch->timer); + if (ctcm_checkalloc_buffer(ch)) + goto done; + + switch (fsm_getstate(fi)) { + case CTC_STATE_STARTRETRY: + case CTC_STATE_SETUPWAIT: + if (CHANNEL_DIRECTION(ch->flags) == READ) { + ctcmpc_chx_rxidle(fi, event, arg); + } else { + fsm_newstate(fi, CTC_STATE_TXIDLE); + fsm_event(priv->fsm, DEV_EVENT_TXUP, dev); + } + goto done; + default: + break; + }; + + fsm_newstate(fi, (CHANNEL_DIRECTION(ch->flags) == READ) + ? CTC_STATE_RXINIT : CTC_STATE_TXINIT); + +done: + if (do_debug) + ctcm_pr_debug("ctcmpc exit : %s(): ch=0x%p id=%s\n", + __FUNCTION__, ch, ch->id); + return; +} + +/** + * Got initial data, check it. If OK, + * notify device statemachine that we are up and + * running. + * + * fi An instance of a channel statemachine. + * event The event, just happened. + * arg Generic pointer, casted from channel * upon call. + */ +void ctcmpc_chx_rxidle(fsm_instance *fi, int event, void *arg) +{ + struct channel *ch = arg; + struct net_device *dev = ch->netdev; + struct ctcm_priv *priv = dev->priv; + struct mpc_group *grp = priv->mpcg; + int rc; + unsigned long saveflags = 0; /* avoids compiler warning */ + + fsm_deltimer(&ch->timer); + ctcm_pr_debug("%s cp:%i enter: %s()\n", + dev->name, smp_processor_id(), __FUNCTION__); + if (do_debug) + ctcm_pr_debug("%s() %s chstate:%i grpstate:%i\n", + __FUNCTION__, ch->id, + fsm_getstate(fi), fsm_getstate(grp->fsm)); + + fsm_newstate(fi, CTC_STATE_RXIDLE); + /* XID processing complete */ + + switch (fsm_getstate(grp->fsm)) { + case MPCG_STATE_FLOWC: + case MPCG_STATE_READY: + if (ctcm_checkalloc_buffer(ch)) + goto done; + ch->trans_skb->data = ch->trans_skb_data; + skb_reset_tail_pointer(ch->trans_skb); + ch->trans_skb->len = 0; + ch->ccw[1].count = ch->max_bufsize; + if (do_debug_ccw) + ctcmpc_dumpit((char *)&ch->ccw[0], + sizeof(struct ccw1) * 3); + if (event == CTC_EVENT_START) + /* see remark about conditional locking */ + spin_lock_irqsave(get_ccwdev_lock(ch->cdev), saveflags); + rc = ccw_device_start(ch->cdev, &ch->ccw[0], + (unsigned long)ch, 0xff, 0); + if (event == CTC_EVENT_START) + spin_unlock_irqrestore( + get_ccwdev_lock(ch->cdev), saveflags); + if (rc != 0) { + fsm_newstate(fi, CTC_STATE_RXINIT); + ctcm_ccw_check_rc(ch, rc, "initial RX"); + goto done; + } + break; + default: + break; + } + + fsm_event(priv->fsm, DEV_EVENT_RXUP, dev); +done: + if (do_debug) + ctcm_pr_debug("ctcmpc exit: %s %s()\n", + dev->name, __FUNCTION__); + return; +} + +/* + * ctcmpc channel FSM action + * called from several points in ctcmpc_ch_fsm + * ctcmpc only + */ +static void ctcmpc_chx_attn(fsm_instance *fsm, int event, void *arg) +{ + struct channel *ch = arg; + struct net_device *dev = ch->netdev; + struct ctcm_priv *priv = dev->priv; + struct mpc_group *grp = priv->mpcg; + + if (do_debug) { + ctcm_pr_debug("ctcmpc enter: %s(): cp=%i ch=0x%p id=%s" + "GrpState:%s ChState:%s\n", + __FUNCTION__, smp_processor_id(), ch, ch->id, + fsm_getstate_str(grp->fsm), 
+ fsm_getstate_str(ch->fsm)); + } + + switch (fsm_getstate(grp->fsm)) { + case MPCG_STATE_XID2INITW: + /* ok..start yside xid exchanges */ + if (!ch->in_mpcgroup) + break; + if (fsm_getstate(ch->fsm) == CH_XID0_PENDING) { + fsm_deltimer(&grp->timer); + fsm_addtimer(&grp->timer, + MPC_XID_TIMEOUT_VALUE, + MPCG_EVENT_TIMER, dev); + fsm_event(grp->fsm, MPCG_EVENT_XID0DO, ch); + + } else if (fsm_getstate(ch->fsm) < CH_XID7_PENDING1) + /* attn rcvd before xid0 processed via bh */ + fsm_newstate(ch->fsm, CH_XID7_PENDING1); + break; + case MPCG_STATE_XID2INITX: + case MPCG_STATE_XID0IOWAIT: + case MPCG_STATE_XID0IOWAIX: + /* attn rcvd before xid0 processed on ch + but mid-xid0 processing for group */ + if (fsm_getstate(ch->fsm) < CH_XID7_PENDING1) + fsm_newstate(ch->fsm, CH_XID7_PENDING1); + break; + case MPCG_STATE_XID7INITW: + case MPCG_STATE_XID7INITX: + case MPCG_STATE_XID7INITI: + case MPCG_STATE_XID7INITZ: + switch (fsm_getstate(ch->fsm)) { + case CH_XID7_PENDING: + fsm_newstate(ch->fsm, CH_XID7_PENDING1); + break; + case CH_XID7_PENDING2: + fsm_newstate(ch->fsm, CH_XID7_PENDING3); + break; + } + fsm_event(grp->fsm, MPCG_EVENT_XID7DONE, dev); + break; + } + + if (do_debug) + ctcm_pr_debug("ctcmpc exit : %s(): cp=%i ch=0x%p id=%s\n", + __FUNCTION__, smp_processor_id(), ch, ch->id); + return; + +} + +/* + * ctcmpc channel FSM action + * called from one point in ctcmpc_ch_fsm + * ctcmpc only + */ +static void ctcmpc_chx_attnbusy(fsm_instance *fsm, int event, void *arg) +{ + struct channel *ch = arg; + struct net_device *dev = ch->netdev; + struct ctcm_priv *priv = dev->priv; + struct mpc_group *grp = priv->mpcg; + + ctcm_pr_debug("ctcmpc enter: %s %s() %s \nGrpState:%s ChState:%s\n", + dev->name, + __FUNCTION__, ch->id, + fsm_getstate_str(grp->fsm), + fsm_getstate_str(ch->fsm)); + + fsm_deltimer(&ch->timer); + + switch (fsm_getstate(grp->fsm)) { + case MPCG_STATE_XID0IOWAIT: + /* vtam wants to be primary.start yside xid exchanges*/ + /* only receive one attn-busy at a time so must not */ + /* change state each time */ + grp->changed_side = 1; + fsm_newstate(grp->fsm, MPCG_STATE_XID2INITW); + break; + case MPCG_STATE_XID2INITW: + if (grp->changed_side == 1) { + grp->changed_side = 2; + break; + } + /* process began via call to establish_conn */ + /* so must report failure instead of reverting */ + /* back to ready-for-xid passive state */ + if (grp->estconnfunc) + goto done; + /* this attnbusy is NOT the result of xside xid */ + /* collisions so yside must have been triggered */ + /* by an ATTN that was not intended to start XID */ + /* processing. Revert back to ready-for-xid and */ + /* wait for ATTN interrupt to signal xid start */ + if (fsm_getstate(ch->fsm) == CH_XID0_INPROGRESS) { + fsm_newstate(ch->fsm, CH_XID0_PENDING) ; + fsm_deltimer(&grp->timer); + goto done; + } + fsm_event(grp->fsm, MPCG_EVENT_INOP, dev); + goto done; + case MPCG_STATE_XID2INITX: + /* XID2 was received before ATTN Busy for second + channel.Send yside xid for second channel. + */ + if (grp->changed_side == 1) { + grp->changed_side = 2; + break; + } + case MPCG_STATE_XID0IOWAIX: + case MPCG_STATE_XID7INITW: + case MPCG_STATE_XID7INITX: + case MPCG_STATE_XID7INITI: + case MPCG_STATE_XID7INITZ: + default: + /* multiple attn-busy indicates too out-of-sync */ + /* and they are certainly not being received as part */ + /* of valid mpc group negotiations.. 
*/ + fsm_event(grp->fsm, MPCG_EVENT_INOP, dev); + goto done; + } + + if (grp->changed_side == 1) { + fsm_deltimer(&grp->timer); + fsm_addtimer(&grp->timer, MPC_XID_TIMEOUT_VALUE, + MPCG_EVENT_TIMER, dev); + } + if (ch->in_mpcgroup) + fsm_event(grp->fsm, MPCG_EVENT_XID0DO, ch); + else + printk(KERN_WARNING "ctcmpc: %s() Not all channels have" + " been added to group\n", __FUNCTION__); + +done: + if (do_debug) + ctcm_pr_debug("ctcmpc exit : %s()%s ch=0x%p id=%s\n", + __FUNCTION__, dev->name, ch, ch->id); + + return; + +} + +/* + * ctcmpc channel FSM action + * called from several points in ctcmpc_ch_fsm + * ctcmpc only + */ +static void ctcmpc_chx_resend(fsm_instance *fsm, int event, void *arg) +{ + struct channel *ch = arg; + struct net_device *dev = ch->netdev; + struct ctcm_priv *priv = dev->priv; + struct mpc_group *grp = priv->mpcg; + + ctcm_pr_debug("ctcmpc enter: %s %s() %s \nGrpState:%s ChState:%s\n", + dev->name, __FUNCTION__, ch->id, + fsm_getstate_str(grp->fsm), + fsm_getstate_str(ch->fsm)); + + fsm_event(grp->fsm, MPCG_EVENT_XID0DO, ch); + + return; +} + +/* + * ctcmpc channel FSM action + * called from several points in ctcmpc_ch_fsm + * ctcmpc only + */ +static void ctcmpc_chx_send_sweep(fsm_instance *fsm, int event, void *arg) +{ + struct channel *ach = arg; + struct net_device *dev = ach->netdev; + struct ctcm_priv *priv = dev->priv; + struct mpc_group *grp = priv->mpcg; + struct channel *wch = priv->channel[WRITE]; + struct channel *rch = priv->channel[READ]; + struct sk_buff *skb; + struct th_sweep *header; + int rc = 0; + unsigned long saveflags = 0; + + if (do_debug) + ctcm_pr_debug("ctcmpc enter: %s(): cp=%i ch=0x%p id=%s\n", + __FUNCTION__, smp_processor_id(), ach, ach->id); + + if (grp->in_sweep == 0) + goto done; + + if (do_debug_data) { + ctcm_pr_debug("ctcmpc: %s() 1: ToVTAM_th_seq= %08x\n" , + __FUNCTION__, wch->th_seq_num); + ctcm_pr_debug("ctcmpc: %s() 1: FromVTAM_th_seq= %08x\n" , + __FUNCTION__, rch->th_seq_num); + } + + if (fsm_getstate(wch->fsm) != CTC_STATE_TXIDLE) { + /* give the previous IO time to complete */ + fsm_addtimer(&wch->sweep_timer, + 200, CTC_EVENT_RSWEEP_TIMER, wch); + goto done; + } + + skb = skb_dequeue(&wch->sweep_queue); + if (!skb) + goto done; + + if (set_normalized_cda(&wch->ccw[4], skb->data)) { + grp->in_sweep = 0; + ctcm_clear_busy_do(dev); + dev_kfree_skb_any(skb); + fsm_event(grp->fsm, MPCG_EVENT_INOP, dev); + goto done; + } else { + atomic_inc(&skb->users); + skb_queue_tail(&wch->io_queue, skb); + } + + /* send out the sweep */ + wch->ccw[4].count = skb->len; + + header = (struct th_sweep *)skb->data; + switch (header->th.th_ch_flag) { + case TH_SWEEP_REQ: + grp->sweep_req_pend_num--; + break; + case TH_SWEEP_RESP: + grp->sweep_rsp_pend_num--; + break; + } + + header->sw.th_last_seq = wch->th_seq_num; + + if (do_debug_ccw) + ctcmpc_dumpit((char *)&wch->ccw[3], sizeof(struct ccw1) * 3); + + ctcm_pr_debug("ctcmpc: %s() sweep packet\n", __FUNCTION__); + ctcmpc_dumpit((char *)header, TH_SWEEP_LENGTH); + + fsm_addtimer(&wch->timer, CTCM_TIME_5_SEC, CTC_EVENT_TIMER, wch); + fsm_newstate(wch->fsm, CTC_STATE_TX); + + spin_lock_irqsave(get_ccwdev_lock(wch->cdev), saveflags); + wch->prof.send_stamp = current_kernel_time(); /* xtime */ + rc = ccw_device_start(wch->cdev, &wch->ccw[3], + (unsigned long) wch, 0xff, 0); + spin_unlock_irqrestore(get_ccwdev_lock(wch->cdev), saveflags); + + if ((grp->sweep_req_pend_num == 0) && + (grp->sweep_rsp_pend_num == 0)) { + grp->in_sweep = 0; + rch->th_seq_num = 0x00; + wch->th_seq_num = 0x00; + 
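Once a sweep frame has gone out on the write channel, the handler decrements the matching pending counter, and only when both the request and the response counters reach zero does it leave sweep mode and reset the TH sequence numbers on both channels. A small sketch of that bookkeeping, using hypothetical fields rather than the driver's group structure:

#include <stdio.h>

struct sweep_state {            /* hypothetical per-group sweep bookkeeping */
    int in_sweep;
    int req_pending;            /* sweep requests still outstanding */
    int rsp_pending;            /* sweep responses still outstanding */
    unsigned int tx_seq;        /* write-channel TH sequence number */
    unsigned int rx_seq;        /* read-channel TH sequence number */
};

enum sweep_kind { SWEEP_REQ, SWEEP_RSP };

static void sweep_sent(struct sweep_state *s, enum sweep_kind kind)
{
    if (!s->in_sweep)
        return;
    if (kind == SWEEP_REQ)
        s->req_pending--;
    else
        s->rsp_pending--;

    if (s->req_pending == 0 && s->rsp_pending == 0) {
        /* sweep exchange finished: both sides restart numbering from zero */
        s->in_sweep = 0;
        s->tx_seq = 0;
        s->rx_seq = 0;
    }
}

int main(void)
{
    struct sweep_state s = { 1, 1, 1, 0xf0000123u, 0xf0000456u };

    sweep_sent(&s, SWEEP_REQ);
    sweep_sent(&s, SWEEP_RSP);
    printf("in_sweep=%d tx_seq=%u rx_seq=%u\n", s.in_sweep, s.tx_seq, s.rx_seq);
    return 0;
}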
ctcm_clear_busy_do(dev); + } + + if (do_debug_data) { + ctcm_pr_debug("ctcmpc: %s()2: ToVTAM_th_seq= %08x\n" , + __FUNCTION__, wch->th_seq_num); + ctcm_pr_debug("ctcmpc: %s()2: FromVTAM_th_seq= %08x\n" , + __FUNCTION__, rch->th_seq_num); + } + + if (rc != 0) + ctcm_ccw_check_rc(wch, rc, "send sweep"); + +done: + if (do_debug) + ctcm_pr_debug("ctcmpc exit: %s() %s\n", __FUNCTION__, ach->id); + return; +} + + +/* + * The ctcmpc statemachine for a channel. + */ + +const fsm_node ctcmpc_ch_fsm[] = { + { CTC_STATE_STOPPED, CTC_EVENT_STOP, ctcm_action_nop }, + { CTC_STATE_STOPPED, CTC_EVENT_START, ctcm_chx_start }, + { CTC_STATE_STOPPED, CTC_EVENT_IO_ENODEV, ctcm_chx_iofatal }, + { CTC_STATE_STOPPED, CTC_EVENT_FINSTAT, ctcm_action_nop }, + { CTC_STATE_STOPPED, CTC_EVENT_MC_FAIL, ctcm_action_nop }, + + { CTC_STATE_NOTOP, CTC_EVENT_STOP, ctcm_chx_stop }, + { CTC_STATE_NOTOP, CTC_EVENT_START, ctcm_action_nop }, + { CTC_STATE_NOTOP, CTC_EVENT_FINSTAT, ctcm_action_nop }, + { CTC_STATE_NOTOP, CTC_EVENT_MC_FAIL, ctcm_action_nop }, + { CTC_STATE_NOTOP, CTC_EVENT_MC_GOOD, ctcm_chx_start }, + { CTC_STATE_NOTOP, CTC_EVENT_UC_RCRESET, ctcm_chx_stop }, + { CTC_STATE_NOTOP, CTC_EVENT_UC_RSRESET, ctcm_chx_stop }, + { CTC_STATE_NOTOP, CTC_EVENT_IO_ENODEV, ctcm_chx_iofatal }, + + { CTC_STATE_STARTWAIT, CTC_EVENT_STOP, ctcm_chx_haltio }, + { CTC_STATE_STARTWAIT, CTC_EVENT_START, ctcm_action_nop }, + { CTC_STATE_STARTWAIT, CTC_EVENT_FINSTAT, ctcm_chx_setmode }, + { CTC_STATE_STARTWAIT, CTC_EVENT_TIMER, ctcm_chx_setuperr }, + { CTC_STATE_STARTWAIT, CTC_EVENT_IO_ENODEV, ctcm_chx_iofatal }, + { CTC_STATE_STARTWAIT, CTC_EVENT_MC_FAIL, ctcm_chx_fail }, + + { CTC_STATE_STARTRETRY, CTC_EVENT_STOP, ctcm_chx_haltio }, + { CTC_STATE_STARTRETRY, CTC_EVENT_TIMER, ctcm_chx_setmode }, + { CTC_STATE_STARTRETRY, CTC_EVENT_FINSTAT, ctcm_chx_setmode }, + { CTC_STATE_STARTRETRY, CTC_EVENT_MC_FAIL, ctcm_chx_fail }, + { CTC_STATE_STARTRETRY, CTC_EVENT_IO_ENODEV, ctcm_chx_iofatal }, + + { CTC_STATE_SETUPWAIT, CTC_EVENT_STOP, ctcm_chx_haltio }, + { CTC_STATE_SETUPWAIT, CTC_EVENT_START, ctcm_action_nop }, + { CTC_STATE_SETUPWAIT, CTC_EVENT_FINSTAT, ctcmpc_chx_firstio }, + { CTC_STATE_SETUPWAIT, CTC_EVENT_UC_RCRESET, ctcm_chx_setuperr }, + { CTC_STATE_SETUPWAIT, CTC_EVENT_UC_RSRESET, ctcm_chx_setuperr }, + { CTC_STATE_SETUPWAIT, CTC_EVENT_TIMER, ctcm_chx_setmode }, + { CTC_STATE_SETUPWAIT, CTC_EVENT_IO_ENODEV, ctcm_chx_iofatal }, + { CTC_STATE_SETUPWAIT, CTC_EVENT_MC_FAIL, ctcm_chx_fail }, + + { CTC_STATE_RXINIT, CTC_EVENT_STOP, ctcm_chx_haltio }, + { CTC_STATE_RXINIT, CTC_EVENT_START, ctcm_action_nop }, + { CTC_STATE_RXINIT, CTC_EVENT_FINSTAT, ctcmpc_chx_rxidle }, + { CTC_STATE_RXINIT, CTC_EVENT_UC_RCRESET, ctcm_chx_rxiniterr }, + { CTC_STATE_RXINIT, CTC_EVENT_UC_RSRESET, ctcm_chx_rxiniterr }, + { CTC_STATE_RXINIT, CTC_EVENT_TIMER, ctcm_chx_rxiniterr }, + { CTC_STATE_RXINIT, CTC_EVENT_ATTNBUSY, ctcm_chx_rxinitfail }, + { CTC_STATE_RXINIT, CTC_EVENT_IO_ENODEV, ctcm_chx_iofatal }, + { CTC_STATE_RXINIT, CTC_EVENT_UC_ZERO, ctcmpc_chx_firstio }, + { CTC_STATE_RXINIT, CTC_EVENT_MC_FAIL, ctcm_chx_fail }, + + { CH_XID0_PENDING, CTC_EVENT_FINSTAT, ctcm_action_nop }, + { CH_XID0_PENDING, CTC_EVENT_ATTN, ctcmpc_chx_attn }, + { CH_XID0_PENDING, CTC_EVENT_STOP, ctcm_chx_haltio }, + { CH_XID0_PENDING, CTC_EVENT_START, ctcm_action_nop }, + { CH_XID0_PENDING, CTC_EVENT_IO_ENODEV, ctcm_chx_iofatal }, + { CH_XID0_PENDING, CTC_EVENT_MC_FAIL, ctcm_chx_fail }, + { CH_XID0_PENDING, CTC_EVENT_UC_RCRESET, ctcm_chx_setuperr }, + { CH_XID0_PENDING, 
CTC_EVENT_UC_RSRESET, ctcm_chx_setuperr }, + { CH_XID0_PENDING, CTC_EVENT_UC_RSRESET, ctcm_chx_setuperr }, + { CH_XID0_PENDING, CTC_EVENT_ATTNBUSY, ctcm_chx_iofatal }, + + { CH_XID0_INPROGRESS, CTC_EVENT_FINSTAT, ctcmpc_chx_rx }, + { CH_XID0_INPROGRESS, CTC_EVENT_ATTN, ctcmpc_chx_attn }, + { CH_XID0_INPROGRESS, CTC_EVENT_STOP, ctcm_chx_haltio }, + { CH_XID0_INPROGRESS, CTC_EVENT_START, ctcm_action_nop }, + { CH_XID0_INPROGRESS, CTC_EVENT_IO_ENODEV, ctcm_chx_iofatal }, + { CH_XID0_INPROGRESS, CTC_EVENT_MC_FAIL, ctcm_chx_fail }, + { CH_XID0_INPROGRESS, CTC_EVENT_UC_ZERO, ctcmpc_chx_rx }, + { CH_XID0_INPROGRESS, CTC_EVENT_UC_RCRESET, ctcm_chx_setuperr }, + { CH_XID0_INPROGRESS, CTC_EVENT_ATTNBUSY, ctcmpc_chx_attnbusy }, + { CH_XID0_INPROGRESS, CTC_EVENT_TIMER, ctcmpc_chx_resend }, + { CH_XID0_INPROGRESS, CTC_EVENT_IO_EBUSY, ctcm_chx_fail }, + + { CH_XID7_PENDING, CTC_EVENT_FINSTAT, ctcmpc_chx_rx }, + { CH_XID7_PENDING, CTC_EVENT_ATTN, ctcmpc_chx_attn }, + { CH_XID7_PENDING, CTC_EVENT_STOP, ctcm_chx_haltio }, + { CH_XID7_PENDING, CTC_EVENT_START, ctcm_action_nop }, + { CH_XID7_PENDING, CTC_EVENT_IO_ENODEV, ctcm_chx_iofatal }, + { CH_XID7_PENDING, CTC_EVENT_MC_FAIL, ctcm_chx_fail }, + { CH_XID7_PENDING, CTC_EVENT_UC_ZERO, ctcmpc_chx_rx }, + { CH_XID7_PENDING, CTC_EVENT_UC_RCRESET, ctcm_chx_setuperr }, + { CH_XID7_PENDING, CTC_EVENT_UC_RSRESET, ctcm_chx_setuperr }, + { CH_XID7_PENDING, CTC_EVENT_UC_RSRESET, ctcm_chx_setuperr }, + { CH_XID7_PENDING, CTC_EVENT_ATTNBUSY, ctcm_chx_iofatal }, + { CH_XID7_PENDING, CTC_EVENT_TIMER, ctcmpc_chx_resend }, + { CH_XID7_PENDING, CTC_EVENT_IO_EBUSY, ctcm_chx_fail }, + + { CH_XID7_PENDING1, CTC_EVENT_FINSTAT, ctcmpc_chx_rx }, + { CH_XID7_PENDING1, CTC_EVENT_ATTN, ctcmpc_chx_attn }, + { CH_XID7_PENDING1, CTC_EVENT_STOP, ctcm_chx_haltio }, + { CH_XID7_PENDING1, CTC_EVENT_START, ctcm_action_nop }, + { CH_XID7_PENDING1, CTC_EVENT_IO_ENODEV, ctcm_chx_iofatal }, + { CH_XID7_PENDING1, CTC_EVENT_MC_FAIL, ctcm_chx_fail }, + { CH_XID7_PENDING1, CTC_EVENT_UC_ZERO, ctcmpc_chx_rx }, + { CH_XID7_PENDING1, CTC_EVENT_UC_RCRESET, ctcm_chx_setuperr }, + { CH_XID7_PENDING1, CTC_EVENT_UC_RSRESET, ctcm_chx_setuperr }, + { CH_XID7_PENDING1, CTC_EVENT_ATTNBUSY, ctcm_chx_iofatal }, + { CH_XID7_PENDING1, CTC_EVENT_TIMER, ctcmpc_chx_resend }, + { CH_XID7_PENDING1, CTC_EVENT_IO_EBUSY, ctcm_chx_fail }, + + { CH_XID7_PENDING2, CTC_EVENT_FINSTAT, ctcmpc_chx_rx }, + { CH_XID7_PENDING2, CTC_EVENT_ATTN, ctcmpc_chx_attn }, + { CH_XID7_PENDING2, CTC_EVENT_STOP, ctcm_chx_haltio }, + { CH_XID7_PENDING2, CTC_EVENT_START, ctcm_action_nop }, + { CH_XID7_PENDING2, CTC_EVENT_IO_ENODEV, ctcm_chx_iofatal }, + { CH_XID7_PENDING2, CTC_EVENT_MC_FAIL, ctcm_chx_fail }, + { CH_XID7_PENDING2, CTC_EVENT_UC_ZERO, ctcmpc_chx_rx }, + { CH_XID7_PENDING2, CTC_EVENT_UC_RCRESET, ctcm_chx_setuperr }, + { CH_XID7_PENDING2, CTC_EVENT_UC_RSRESET, ctcm_chx_setuperr }, + { CH_XID7_PENDING2, CTC_EVENT_ATTNBUSY, ctcm_chx_iofatal }, + { CH_XID7_PENDING2, CTC_EVENT_TIMER, ctcmpc_chx_resend }, + { CH_XID7_PENDING2, CTC_EVENT_IO_EBUSY, ctcm_chx_fail }, + + { CH_XID7_PENDING3, CTC_EVENT_FINSTAT, ctcmpc_chx_rx }, + { CH_XID7_PENDING3, CTC_EVENT_ATTN, ctcmpc_chx_attn }, + { CH_XID7_PENDING3, CTC_EVENT_STOP, ctcm_chx_haltio }, + { CH_XID7_PENDING3, CTC_EVENT_START, ctcm_action_nop }, + { CH_XID7_PENDING3, CTC_EVENT_IO_ENODEV, ctcm_chx_iofatal }, + { CH_XID7_PENDING3, CTC_EVENT_MC_FAIL, ctcm_chx_fail }, + { CH_XID7_PENDING3, CTC_EVENT_UC_ZERO, ctcmpc_chx_rx }, + { CH_XID7_PENDING3, CTC_EVENT_UC_RCRESET, ctcm_chx_setuperr }, + { 
CH_XID7_PENDING3, CTC_EVENT_UC_RSRESET, ctcm_chx_setuperr }, + { CH_XID7_PENDING3, CTC_EVENT_ATTNBUSY, ctcm_chx_iofatal }, + { CH_XID7_PENDING3, CTC_EVENT_TIMER, ctcmpc_chx_resend }, + { CH_XID7_PENDING3, CTC_EVENT_IO_EBUSY, ctcm_chx_fail }, + + { CH_XID7_PENDING4, CTC_EVENT_FINSTAT, ctcmpc_chx_rx }, + { CH_XID7_PENDING4, CTC_EVENT_ATTN, ctcmpc_chx_attn }, + { CH_XID7_PENDING4, CTC_EVENT_STOP, ctcm_chx_haltio }, + { CH_XID7_PENDING4, CTC_EVENT_START, ctcm_action_nop }, + { CH_XID7_PENDING4, CTC_EVENT_IO_ENODEV, ctcm_chx_iofatal }, + { CH_XID7_PENDING4, CTC_EVENT_MC_FAIL, ctcm_chx_fail }, + { CH_XID7_PENDING4, CTC_EVENT_UC_ZERO, ctcmpc_chx_rx }, + { CH_XID7_PENDING4, CTC_EVENT_UC_RCRESET, ctcm_chx_setuperr }, + { CH_XID7_PENDING4, CTC_EVENT_UC_RSRESET, ctcm_chx_setuperr }, + { CH_XID7_PENDING4, CTC_EVENT_ATTNBUSY, ctcm_chx_iofatal }, + { CH_XID7_PENDING4, CTC_EVENT_TIMER, ctcmpc_chx_resend }, + { CH_XID7_PENDING4, CTC_EVENT_IO_EBUSY, ctcm_chx_fail }, + + { CTC_STATE_RXIDLE, CTC_EVENT_STOP, ctcm_chx_haltio }, + { CTC_STATE_RXIDLE, CTC_EVENT_START, ctcm_action_nop }, + { CTC_STATE_RXIDLE, CTC_EVENT_FINSTAT, ctcmpc_chx_rx }, + { CTC_STATE_RXIDLE, CTC_EVENT_UC_RCRESET, ctcm_chx_rxdisc }, + { CTC_STATE_RXIDLE, CTC_EVENT_UC_RSRESET, ctcm_chx_fail }, + { CTC_STATE_RXIDLE, CTC_EVENT_IO_ENODEV, ctcm_chx_iofatal }, + { CTC_STATE_RXIDLE, CTC_EVENT_MC_FAIL, ctcm_chx_fail }, + { CTC_STATE_RXIDLE, CTC_EVENT_UC_ZERO, ctcmpc_chx_rx }, + + { CTC_STATE_TXINIT, CTC_EVENT_STOP, ctcm_chx_haltio }, + { CTC_STATE_TXINIT, CTC_EVENT_START, ctcm_action_nop }, + { CTC_STATE_TXINIT, CTC_EVENT_FINSTAT, ctcm_chx_txidle }, + { CTC_STATE_TXINIT, CTC_EVENT_UC_RCRESET, ctcm_chx_txiniterr }, + { CTC_STATE_TXINIT, CTC_EVENT_UC_RSRESET, ctcm_chx_txiniterr }, + { CTC_STATE_TXINIT, CTC_EVENT_TIMER, ctcm_chx_txiniterr }, + { CTC_STATE_TXINIT, CTC_EVENT_IO_ENODEV, ctcm_chx_iofatal }, + { CTC_STATE_TXINIT, CTC_EVENT_MC_FAIL, ctcm_chx_fail }, + { CTC_STATE_TXINIT, CTC_EVENT_RSWEEP_TIMER, ctcmpc_chx_send_sweep }, + + { CTC_STATE_TXIDLE, CTC_EVENT_STOP, ctcm_chx_haltio }, + { CTC_STATE_TXIDLE, CTC_EVENT_START, ctcm_action_nop }, + { CTC_STATE_TXIDLE, CTC_EVENT_FINSTAT, ctcmpc_chx_firstio }, + { CTC_STATE_TXIDLE, CTC_EVENT_UC_RCRESET, ctcm_chx_fail }, + { CTC_STATE_TXIDLE, CTC_EVENT_UC_RSRESET, ctcm_chx_fail }, + { CTC_STATE_TXIDLE, CTC_EVENT_IO_ENODEV, ctcm_chx_iofatal }, + { CTC_STATE_TXIDLE, CTC_EVENT_MC_FAIL, ctcm_chx_fail }, + { CTC_STATE_TXIDLE, CTC_EVENT_RSWEEP_TIMER, ctcmpc_chx_send_sweep }, + + { CTC_STATE_TERM, CTC_EVENT_STOP, ctcm_action_nop }, + { CTC_STATE_TERM, CTC_EVENT_START, ctcm_chx_restart }, + { CTC_STATE_TERM, CTC_EVENT_FINSTAT, ctcm_chx_stopped }, + { CTC_STATE_TERM, CTC_EVENT_UC_RCRESET, ctcm_action_nop }, + { CTC_STATE_TERM, CTC_EVENT_UC_RSRESET, ctcm_action_nop }, + { CTC_STATE_TERM, CTC_EVENT_MC_FAIL, ctcm_chx_fail }, + { CTC_STATE_TERM, CTC_EVENT_IO_EBUSY, ctcm_chx_fail }, + { CTC_STATE_TERM, CTC_EVENT_IO_ENODEV, ctcm_chx_iofatal }, + + { CTC_STATE_DTERM, CTC_EVENT_STOP, ctcm_chx_haltio }, + { CTC_STATE_DTERM, CTC_EVENT_START, ctcm_chx_restart }, + { CTC_STATE_DTERM, CTC_EVENT_FINSTAT, ctcm_chx_setmode }, + { CTC_STATE_DTERM, CTC_EVENT_UC_RCRESET, ctcm_action_nop }, + { CTC_STATE_DTERM, CTC_EVENT_UC_RSRESET, ctcm_action_nop }, + { CTC_STATE_DTERM, CTC_EVENT_MC_FAIL, ctcm_chx_fail }, + { CTC_STATE_DTERM, CTC_EVENT_IO_ENODEV, ctcm_chx_iofatal }, + + { CTC_STATE_TX, CTC_EVENT_STOP, ctcm_chx_haltio }, + { CTC_STATE_TX, CTC_EVENT_START, ctcm_action_nop }, + { CTC_STATE_TX, CTC_EVENT_FINSTAT, ctcmpc_chx_txdone }, + 
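Each fsm_node row pairs a state and an event with an action function; the generic fsm engine scans the table and fires the matching entry. A stand-alone sketch of that dispatch scheme (hypothetical engine and states, not the driver's fsm implementation):

#include <stdio.h>

enum state { ST_IDLE, ST_TX, NR_STATES };
enum event { EV_START, EV_FINSTAT, NR_EVENTS };

typedef void (*action_fn)(int *state, void *arg);

struct fsm_row {                /* one (state, event, action) entry */
    int state;
    int event;
    action_fn action;
};

static void act_start(int *state, void *arg)
{
    (void)arg;
    *state = ST_TX;             /* start transmission */
}

static void act_txdone(int *state, void *arg)
{
    (void)arg;
    *state = ST_IDLE;           /* transmission finished, back to idle */
}

static const struct fsm_row table[] = {
    { ST_IDLE, EV_START,   act_start  },
    { ST_TX,   EV_FINSTAT, act_txdone },
};

/* Linear scan; the first matching row fires, unmatched events are ignored. */
static void fsm_fire(int *state, int event, void *arg)
{
    for (size_t i = 0; i < sizeof(table) / sizeof(table[0]); i++)
        if (table[i].state == *state && table[i].event == event) {
            table[i].action(state, arg);
            return;
        }
}

int main(void)
{
    int state = ST_IDLE;

    fsm_fire(&state, EV_START, NULL);
    fsm_fire(&state, EV_FINSTAT, NULL);
    printf("final state: %s\n", state == ST_IDLE ? "idle" : "tx");
    return 0;
}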
{ CTC_STATE_TX, CTC_EVENT_UC_RCRESET, ctcm_chx_fail }, + { CTC_STATE_TX, CTC_EVENT_UC_RSRESET, ctcm_chx_fail }, + { CTC_STATE_TX, CTC_EVENT_TIMER, ctcm_chx_txretry }, + { CTC_STATE_TX, CTC_EVENT_IO_ENODEV, ctcm_chx_iofatal }, + { CTC_STATE_TX, CTC_EVENT_MC_FAIL, ctcm_chx_fail }, + { CTC_STATE_TX, CTC_EVENT_RSWEEP_TIMER, ctcmpc_chx_send_sweep }, + { CTC_STATE_TX, CTC_EVENT_IO_EBUSY, ctcm_chx_fail }, + + { CTC_STATE_RXERR, CTC_EVENT_STOP, ctcm_chx_haltio }, + { CTC_STATE_TXERR, CTC_EVENT_STOP, ctcm_chx_haltio }, + { CTC_STATE_TXERR, CTC_EVENT_IO_ENODEV, ctcm_chx_iofatal }, + { CTC_STATE_TXERR, CTC_EVENT_MC_FAIL, ctcm_chx_fail }, + { CTC_STATE_RXERR, CTC_EVENT_MC_FAIL, ctcm_chx_fail }, +}; + +int mpc_ch_fsm_len = ARRAY_SIZE(ctcmpc_ch_fsm); + +/* + * Actions for interface - statemachine. + */ + +/** + * Startup channels by sending CTC_EVENT_START to each channel. + * + * fi An instance of an interface statemachine. + * event The event, just happened. + * arg Generic pointer, casted from struct net_device * upon call. + */ +static void dev_action_start(fsm_instance *fi, int event, void *arg) +{ + struct net_device *dev = arg; + struct ctcm_priv *priv = dev->priv; + int direction; + + CTCMY_DBF_DEV_NAME(SETUP, dev, ""); + + fsm_deltimer(&priv->restart_timer); + fsm_newstate(fi, DEV_STATE_STARTWAIT_RXTX); + if (IS_MPC(priv)) + priv->mpcg->channels_terminating = 0; + for (direction = READ; direction <= WRITE; direction++) { + struct channel *ch = priv->channel[direction]; + fsm_event(ch->fsm, CTC_EVENT_START, ch); + } +} + +/** + * Shutdown channels by sending CTC_EVENT_STOP to each channel. + * + * fi An instance of an interface statemachine. + * event The event, just happened. + * arg Generic pointer, casted from struct net_device * upon call. + */ +static void dev_action_stop(fsm_instance *fi, int event, void *arg) +{ + int direction; + struct net_device *dev = arg; + struct ctcm_priv *priv = dev->priv; + + CTCMY_DBF_DEV_NAME(SETUP, dev, ""); + + fsm_newstate(fi, DEV_STATE_STOPWAIT_RXTX); + for (direction = READ; direction <= WRITE; direction++) { + struct channel *ch = priv->channel[direction]; + fsm_event(ch->fsm, CTC_EVENT_STOP, ch); + ch->th_seq_num = 0x00; + if (do_debug) + ctcm_pr_debug("ctcm: %s() CH_th_seq= %08x\n", + __FUNCTION__, ch->th_seq_num); + } + if (IS_MPC(priv)) + fsm_newstate(priv->mpcg->fsm, MPCG_STATE_RESET); +} + +static void dev_action_restart(fsm_instance *fi, int event, void *arg) +{ + int restart_timer; + struct net_device *dev = arg; + struct ctcm_priv *priv = dev->priv; + + CTCMY_DBF_DEV_NAME(TRACE, dev, ""); + + if (IS_MPC(priv)) { + ctcm_pr_info("ctcm: %s Restarting Device and " + "MPC Group in 5 seconds\n", + dev->name); + restart_timer = CTCM_TIME_1_SEC; + } else { + ctcm_pr_info("%s: Restarting\n", dev->name); + restart_timer = CTCM_TIME_5_SEC; + } + + dev_action_stop(fi, event, arg); + fsm_event(priv->fsm, DEV_EVENT_STOP, dev); + if (IS_MPC(priv)) + fsm_newstate(priv->mpcg->fsm, MPCG_STATE_RESET); + + /* going back into start sequence too quickly can */ + /* result in the other side becoming unreachable due */ + /* to sense reported when IO is aborted */ + fsm_addtimer(&priv->restart_timer, restart_timer, + DEV_EVENT_START, dev); +} + +/** + * Called from channel statemachine + * when a channel is up and running. + * + * fi An instance of an interface statemachine. + * event The event, just happened. + * arg Generic pointer, casted from struct net_device * upon call. 
+ */ +static void dev_action_chup(fsm_instance *fi, int event, void *arg) +{ + struct net_device *dev = arg; + struct ctcm_priv *priv = dev->priv; + + CTCMY_DBF_DEV_NAME(SETUP, dev, ""); + + switch (fsm_getstate(fi)) { + case DEV_STATE_STARTWAIT_RXTX: + if (event == DEV_EVENT_RXUP) + fsm_newstate(fi, DEV_STATE_STARTWAIT_TX); + else + fsm_newstate(fi, DEV_STATE_STARTWAIT_RX); + break; + case DEV_STATE_STARTWAIT_RX: + if (event == DEV_EVENT_RXUP) { + fsm_newstate(fi, DEV_STATE_RUNNING); + ctcm_pr_info("%s: connected with remote side\n", + dev->name); + ctcm_clear_busy(dev); + } + break; + case DEV_STATE_STARTWAIT_TX: + if (event == DEV_EVENT_TXUP) { + fsm_newstate(fi, DEV_STATE_RUNNING); + ctcm_pr_info("%s: connected with remote side\n", + dev->name); + ctcm_clear_busy(dev); + } + break; + case DEV_STATE_STOPWAIT_TX: + if (event == DEV_EVENT_RXUP) + fsm_newstate(fi, DEV_STATE_STOPWAIT_RXTX); + break; + case DEV_STATE_STOPWAIT_RX: + if (event == DEV_EVENT_TXUP) + fsm_newstate(fi, DEV_STATE_STOPWAIT_RXTX); + break; + } + + if (IS_MPC(priv)) { + if (event == DEV_EVENT_RXUP) + mpc_channel_action(priv->channel[READ], + READ, MPC_CHANNEL_ADD); + else + mpc_channel_action(priv->channel[WRITE], + WRITE, MPC_CHANNEL_ADD); + } +} + +/** + * Called from device statemachine + * when a channel has been shutdown. + * + * fi An instance of an interface statemachine. + * event The event, just happened. + * arg Generic pointer, casted from struct net_device * upon call. + */ +static void dev_action_chdown(fsm_instance *fi, int event, void *arg) +{ + + struct net_device *dev = arg; + struct ctcm_priv *priv = dev->priv; + + CTCMY_DBF_DEV_NAME(SETUP, dev, ""); + + switch (fsm_getstate(fi)) { + case DEV_STATE_RUNNING: + if (event == DEV_EVENT_TXDOWN) + fsm_newstate(fi, DEV_STATE_STARTWAIT_TX); + else + fsm_newstate(fi, DEV_STATE_STARTWAIT_RX); + break; + case DEV_STATE_STARTWAIT_RX: + if (event == DEV_EVENT_TXDOWN) + fsm_newstate(fi, DEV_STATE_STARTWAIT_RXTX); + break; + case DEV_STATE_STARTWAIT_TX: + if (event == DEV_EVENT_RXDOWN) + fsm_newstate(fi, DEV_STATE_STARTWAIT_RXTX); + break; + case DEV_STATE_STOPWAIT_RXTX: + if (event == DEV_EVENT_TXDOWN) + fsm_newstate(fi, DEV_STATE_STOPWAIT_RX); + else + fsm_newstate(fi, DEV_STATE_STOPWAIT_TX); + break; + case DEV_STATE_STOPWAIT_RX: + if (event == DEV_EVENT_RXDOWN) + fsm_newstate(fi, DEV_STATE_STOPPED); + break; + case DEV_STATE_STOPWAIT_TX: + if (event == DEV_EVENT_TXDOWN) + fsm_newstate(fi, DEV_STATE_STOPPED); + break; + } + if (IS_MPC(priv)) { + if (event == DEV_EVENT_RXDOWN) + mpc_channel_action(priv->channel[READ], + READ, MPC_CHANNEL_REMOVE); + else + mpc_channel_action(priv->channel[WRITE], + WRITE, MPC_CHANNEL_REMOVE); + } +} + +const fsm_node dev_fsm[] = { + { DEV_STATE_STOPPED, DEV_EVENT_START, dev_action_start }, + { DEV_STATE_STOPWAIT_RXTX, DEV_EVENT_START, dev_action_start }, + { DEV_STATE_STOPWAIT_RXTX, DEV_EVENT_RXDOWN, dev_action_chdown }, + { DEV_STATE_STOPWAIT_RXTX, DEV_EVENT_TXDOWN, dev_action_chdown }, + { DEV_STATE_STOPWAIT_RXTX, DEV_EVENT_RESTART, dev_action_restart }, + { DEV_STATE_STOPWAIT_RX, DEV_EVENT_START, dev_action_start }, + { DEV_STATE_STOPWAIT_RX, DEV_EVENT_RXUP, dev_action_chup }, + { DEV_STATE_STOPWAIT_RX, DEV_EVENT_TXUP, dev_action_chup }, + { DEV_STATE_STOPWAIT_RX, DEV_EVENT_RXDOWN, dev_action_chdown }, + { DEV_STATE_STOPWAIT_RX, DEV_EVENT_RESTART, dev_action_restart }, + { DEV_STATE_STOPWAIT_TX, DEV_EVENT_START, dev_action_start }, + { DEV_STATE_STOPWAIT_TX, DEV_EVENT_RXUP, dev_action_chup }, + { DEV_STATE_STOPWAIT_TX, 
DEV_EVENT_TXUP, dev_action_chup }, + { DEV_STATE_STOPWAIT_TX, DEV_EVENT_TXDOWN, dev_action_chdown }, + { DEV_STATE_STOPWAIT_TX, DEV_EVENT_RESTART, dev_action_restart }, + { DEV_STATE_STARTWAIT_RXTX, DEV_EVENT_STOP, dev_action_stop }, + { DEV_STATE_STARTWAIT_RXTX, DEV_EVENT_RXUP, dev_action_chup }, + { DEV_STATE_STARTWAIT_RXTX, DEV_EVENT_TXUP, dev_action_chup }, + { DEV_STATE_STARTWAIT_RXTX, DEV_EVENT_RXDOWN, dev_action_chdown }, + { DEV_STATE_STARTWAIT_RXTX, DEV_EVENT_TXDOWN, dev_action_chdown }, + { DEV_STATE_STARTWAIT_RXTX, DEV_EVENT_RESTART, dev_action_restart }, + { DEV_STATE_STARTWAIT_TX, DEV_EVENT_STOP, dev_action_stop }, + { DEV_STATE_STARTWAIT_TX, DEV_EVENT_RXUP, dev_action_chup }, + { DEV_STATE_STARTWAIT_TX, DEV_EVENT_TXUP, dev_action_chup }, + { DEV_STATE_STARTWAIT_TX, DEV_EVENT_RXDOWN, dev_action_chdown }, + { DEV_STATE_STARTWAIT_TX, DEV_EVENT_RESTART, dev_action_restart }, + { DEV_STATE_STARTWAIT_RX, DEV_EVENT_STOP, dev_action_stop }, + { DEV_STATE_STARTWAIT_RX, DEV_EVENT_RXUP, dev_action_chup }, + { DEV_STATE_STARTWAIT_RX, DEV_EVENT_TXUP, dev_action_chup }, + { DEV_STATE_STARTWAIT_RX, DEV_EVENT_TXDOWN, dev_action_chdown }, + { DEV_STATE_STARTWAIT_RX, DEV_EVENT_RESTART, dev_action_restart }, + { DEV_STATE_RUNNING, DEV_EVENT_STOP, dev_action_stop }, + { DEV_STATE_RUNNING, DEV_EVENT_RXDOWN, dev_action_chdown }, + { DEV_STATE_RUNNING, DEV_EVENT_TXDOWN, dev_action_chdown }, + { DEV_STATE_RUNNING, DEV_EVENT_TXUP, ctcm_action_nop }, + { DEV_STATE_RUNNING, DEV_EVENT_RXUP, ctcm_action_nop }, + { DEV_STATE_RUNNING, DEV_EVENT_RESTART, dev_action_restart }, +}; + +int dev_fsm_len = ARRAY_SIZE(dev_fsm); + +/* --- This is the END my friend --- */ + diff --git a/drivers/s390/net/ctcm_fsms.h b/drivers/s390/net/ctcm_fsms.h new file mode 100644 index 000000000000..2326aba9807a --- /dev/null +++ b/drivers/s390/net/ctcm_fsms.h @@ -0,0 +1,359 @@ +/* + * drivers/s390/net/ctcm_fsms.h + * + * Copyright IBM Corp. 2001, 2007 + * Authors: Fritz Elfert (felfert@millenux.com) + * Peter Tiedemann (ptiedem@de.ibm.com) + * MPC additions : + * Belinda Thompson (belindat@us.ibm.com) + * Andy Richter (richtera@us.ibm.com) + */ +#ifndef _CTCM_FSMS_H_ +#define _CTCM_FSMS_H_ + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include + +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include + +#include + +#include "fsm.h" +#include "cu3088.h" +#include "ctcm_main.h" + +/* + * Definitions for the channel statemachine(s) for ctc and ctcmpc + * + * To allow better kerntyping, prefix-less definitions for channel states + * and channel events have been replaced : + * ch_event... -> ctc_ch_event... + * CH_EVENT... -> CTC_EVENT... + * ch_state... -> ctc_ch_state... + * CH_STATE... -> CTC_STATE... + */ +/* + * Events of the channel statemachine(s) for ctc and ctcmpc + */ +enum ctc_ch_events { + /* + * Events, representing return code of + * I/O operations (ccw_device_start, ccw_device_halt et al.) 
+ */ + CTC_EVENT_IO_SUCCESS, + CTC_EVENT_IO_EBUSY, + CTC_EVENT_IO_ENODEV, + CTC_EVENT_IO_UNKNOWN, + + CTC_EVENT_ATTNBUSY, + CTC_EVENT_ATTN, + CTC_EVENT_BUSY, + /* + * Events, representing unit-check + */ + CTC_EVENT_UC_RCRESET, + CTC_EVENT_UC_RSRESET, + CTC_EVENT_UC_TXTIMEOUT, + CTC_EVENT_UC_TXPARITY, + CTC_EVENT_UC_HWFAIL, + CTC_EVENT_UC_RXPARITY, + CTC_EVENT_UC_ZERO, + CTC_EVENT_UC_UNKNOWN, + /* + * Events, representing subchannel-check + */ + CTC_EVENT_SC_UNKNOWN, + /* + * Events, representing machine checks + */ + CTC_EVENT_MC_FAIL, + CTC_EVENT_MC_GOOD, + /* + * Event, representing normal IRQ + */ + CTC_EVENT_IRQ, + CTC_EVENT_FINSTAT, + /* + * Event, representing timer expiry. + */ + CTC_EVENT_TIMER, + /* + * Events, representing commands from upper levels. + */ + CTC_EVENT_START, + CTC_EVENT_STOP, + CTC_NR_EVENTS, + /* + * additional MPC events + */ + CTC_EVENT_SEND_XID = CTC_NR_EVENTS, + CTC_EVENT_RSWEEP_TIMER, + /* + * MUST be always the last element!! + */ + CTC_MPC_NR_EVENTS, +}; + +/* + * States of the channel statemachine(s) for ctc and ctcmpc. + */ +enum ctc_ch_states { + /* + * Channel not assigned to any device, + * initial state, direction invalid + */ + CTC_STATE_IDLE, + /* + * Channel assigned but not operating + */ + CTC_STATE_STOPPED, + CTC_STATE_STARTWAIT, + CTC_STATE_STARTRETRY, + CTC_STATE_SETUPWAIT, + CTC_STATE_RXINIT, + CTC_STATE_TXINIT, + CTC_STATE_RX, + CTC_STATE_TX, + CTC_STATE_RXIDLE, + CTC_STATE_TXIDLE, + CTC_STATE_RXERR, + CTC_STATE_TXERR, + CTC_STATE_TERM, + CTC_STATE_DTERM, + CTC_STATE_NOTOP, + CTC_NR_STATES, /* MUST be the last element of non-expanded states */ + /* + * additional MPC states + */ + CH_XID0_PENDING = CTC_NR_STATES, + CH_XID0_INPROGRESS, + CH_XID7_PENDING, + CH_XID7_PENDING1, + CH_XID7_PENDING2, + CH_XID7_PENDING3, + CH_XID7_PENDING4, + CTC_MPC_NR_STATES, /* MUST be the last element of expanded mpc states */ +}; + +extern const char *ctc_ch_event_names[]; + +extern const char *ctc_ch_state_names[]; + +void ctcm_ccw_check_rc(struct channel *ch, int rc, char *msg); +void ctcm_purge_skb_queue(struct sk_buff_head *q); +void fsm_action_nop(fsm_instance *fi, int event, void *arg); + +/* + * ----- non-static actions for ctcm channel statemachine ----- + * + */ +void ctcm_chx_txidle(fsm_instance *fi, int event, void *arg); + +/* + * ----- FSM (state/event/action) of the ctcm channel statemachine ----- + */ +extern const fsm_node ch_fsm[]; +extern int ch_fsm_len; + + +/* + * ----- non-static actions for ctcmpc channel statemachine ---- + * + */ +/* shared : +void ctcm_chx_txidle(fsm_instance * fi, int event, void *arg); + */ +void ctcmpc_chx_rxidle(fsm_instance *fi, int event, void *arg); + +/* + * ----- FSM (state/event/action) of the ctcmpc channel statemachine ----- + */ +extern const fsm_node ctcmpc_ch_fsm[]; +extern int mpc_ch_fsm_len; + +/* + * Definitions for the device interface statemachine for ctc and mpc + */ + +/* + * States of the device interface statemachine. + */ +enum dev_states { + DEV_STATE_STOPPED, + DEV_STATE_STARTWAIT_RXTX, + DEV_STATE_STARTWAIT_RX, + DEV_STATE_STARTWAIT_TX, + DEV_STATE_STOPWAIT_RXTX, + DEV_STATE_STOPWAIT_RX, + DEV_STATE_STOPWAIT_TX, + DEV_STATE_RUNNING, + /* + * MUST be always the last element!! + */ + CTCM_NR_DEV_STATES +}; + +extern const char *dev_state_names[]; + +/* + * Events of the device interface statemachine. 
+ * ctcm and ctcmpc + */ +enum dev_events { + DEV_EVENT_START, + DEV_EVENT_STOP, + DEV_EVENT_RXUP, + DEV_EVENT_TXUP, + DEV_EVENT_RXDOWN, + DEV_EVENT_TXDOWN, + DEV_EVENT_RESTART, + /* + * MUST be always the last element!! + */ + CTCM_NR_DEV_EVENTS +}; + +extern const char *dev_event_names[]; + +/* + * Actions for the device interface statemachine. + * ctc and ctcmpc + */ +/* +static void dev_action_start(fsm_instance * fi, int event, void *arg); +static void dev_action_stop(fsm_instance * fi, int event, void *arg); +static void dev_action_restart(fsm_instance *fi, int event, void *arg); +static void dev_action_chup(fsm_instance * fi, int event, void *arg); +static void dev_action_chdown(fsm_instance * fi, int event, void *arg); +*/ + +/* + * The (state/event/action) fsm table of the device interface statemachine. + * ctcm and ctcmpc + */ +extern const fsm_node dev_fsm[]; +extern int dev_fsm_len; + + +/* + * Definitions for the MPC Group statemachine + */ + +/* + * MPC Group Station FSM States + +State Name When In This State +====================== ======================================= +MPCG_STATE_RESET Initial State When Driver Loaded + We receive and send NOTHING + +MPCG_STATE_INOP INOP Received. + Group level non-recoverable error + +MPCG_STATE_READY XID exchanges for at least 1 write and + 1 read channel have completed. + Group is ready for data transfer. + +States from ctc_mpc_alloc_channel +============================================================== +MPCG_STATE_XID2INITW Awaiting XID2(0) Initiation + ATTN from other side will start + XID negotiations. + Y-side protocol only. + +MPCG_STATE_XID2INITX XID2(0) negotiations are in progress. + At least 1, but not all, XID2(0)'s + have been received from partner. + +MPCG_STATE_XID7INITW XID2(0) complete + No XID2(7)'s have yet been received. + XID2(7) negotiations pending. + +MPCG_STATE_XID7INITX XID2(7) negotiations in progress. + At least 1, but not all, XID2(7)'s + have been received from partner. + +MPCG_STATE_XID7INITF XID2(7) negotiations complete. + Transitioning to READY. + +MPCG_STATE_READY Ready for Data Transfer. + + +States from ctc_mpc_establish_connectivity call +============================================================== +MPCG_STATE_XID0IOWAIT Initiating XID2(0) negotiations. + X-side protocol only. + ATTN-BUSY from other side will convert + this to Y-side protocol and the + ctc_mpc_alloc_channel flow will begin. + +MPCG_STATE_XID0IOWAIX XID2(0) negotiations are in progress. + At least 1, but not all, XID2(0)'s + have been received from partner. + +MPCG_STATE_XID7INITI XID2(0) complete + No XID2(7)'s have yet been received. + XID2(7) negotiations pending. + +MPCG_STATE_XID7INITZ XID2(7) negotiations in progress. + At least 1, but not all, XID2(7)'s + have been received from partner. + +MPCG_STATE_XID7INITF XID2(7) negotiations complete. + Transitioning to READY. + +MPCG_STATE_READY Ready for Data Transfer. 
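The group only reaches MPCG_STATE_READY after the XID2(0) and then the XID2(7) exchange has completed on every participating channel. A heavily reduced sketch of that progression (hypothetical fields and states, not the driver's group structure):

#include <stdio.h>

enum grp_state { GRP_RESET, GRP_XID0, GRP_XID7, GRP_READY };

struct group {                  /* hypothetical, heavily reduced group view */
    enum grp_state state;
    int xid0_done;              /* channels that completed XID2(0) */
    int xid7_done;              /* channels that completed XID2(7) */
    int channels;               /* channels taking part in the exchange */
};

/* Advance the group as individual channels report negotiation progress. */
static void xid_progress(struct group *g, int is_xid7)
{
    if (is_xid7)
        g->xid7_done++;
    else
        g->xid0_done++;

    if (g->state == GRP_RESET)
        g->state = GRP_XID0;
    if (g->state == GRP_XID0 && g->xid0_done == g->channels)
        g->state = GRP_XID7;    /* all XID2(0) seen, move on to XID2(7) */
    if (g->state == GRP_XID7 && g->xid7_done == g->channels)
        g->state = GRP_READY;   /* negotiation complete: data transfer allowed */
}

int main(void)
{
    struct group g = { GRP_RESET, 0, 0, 2 };   /* one read + one write channel */

    xid_progress(&g, 0);
    xid_progress(&g, 0);
    xid_progress(&g, 1);
    xid_progress(&g, 1);
    printf("group %s\n", g.state == GRP_READY ? "ready" : "not ready");
    return 0;
}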
+ +*/ + +enum mpcg_events { + MPCG_EVENT_INOP, + MPCG_EVENT_DISCONC, + MPCG_EVENT_XID0DO, + MPCG_EVENT_XID2, + MPCG_EVENT_XID2DONE, + MPCG_EVENT_XID7DONE, + MPCG_EVENT_TIMER, + MPCG_EVENT_DOIO, + MPCG_NR_EVENTS, +}; + +enum mpcg_states { + MPCG_STATE_RESET, + MPCG_STATE_INOP, + MPCG_STATE_XID2INITW, + MPCG_STATE_XID2INITX, + MPCG_STATE_XID7INITW, + MPCG_STATE_XID7INITX, + MPCG_STATE_XID0IOWAIT, + MPCG_STATE_XID0IOWAIX, + MPCG_STATE_XID7INITI, + MPCG_STATE_XID7INITZ, + MPCG_STATE_XID7INITF, + MPCG_STATE_FLOWC, + MPCG_STATE_READY, + MPCG_NR_STATES, +}; + +#endif +/* --- This is the END my friend --- */ diff --git a/drivers/s390/net/ctcm_main.c b/drivers/s390/net/ctcm_main.c new file mode 100644 index 000000000000..d52843da4f55 --- /dev/null +++ b/drivers/s390/net/ctcm_main.c @@ -0,0 +1,1772 @@ +/* + * drivers/s390/net/ctcm_main.c + * + * Copyright IBM Corp. 2001, 2007 + * Author(s): + * Original CTC driver(s): + * Fritz Elfert (felfert@millenux.com) + * Dieter Wellerdiek (wel@de.ibm.com) + * Martin Schwidefsky (schwidefsky@de.ibm.com) + * Denis Joseph Barrow (barrow_dj@yahoo.com) + * Jochen Roehrig (roehrig@de.ibm.com) + * Cornelia Huck + * MPC additions: + * Belinda Thompson (belindat@us.ibm.com) + * Andy Richter (richtera@us.ibm.com) + * Revived by: + * Peter Tiedemann (ptiedem@de.ibm.com) + */ + +#undef DEBUG +#undef DEBUGDATA +#undef DEBUGCCW + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include + +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include + +#include + +#include "cu3088.h" +#include "ctcm_fsms.h" +#include "ctcm_main.h" + +/* Some common global variables */ + +/* + * Linked list of all detected channels. + */ +struct channel *channels; + +/** + * Unpack a just received skb and hand it over to + * upper layers. + * + * ch The channel where this skb has been received. + * pskb The received skb. + */ +void ctcm_unpack_skb(struct channel *ch, struct sk_buff *pskb) +{ + struct net_device *dev = ch->netdev; + struct ctcm_priv *priv = dev->priv; + __u16 len = *((__u16 *) pskb->data); + + skb_put(pskb, 2 + LL_HEADER_LENGTH); + skb_pull(pskb, 2); + pskb->dev = dev; + pskb->ip_summed = CHECKSUM_UNNECESSARY; + while (len > 0) { + struct sk_buff *skb; + int skblen; + struct ll_header *header = (struct ll_header *)pskb->data; + + skb_pull(pskb, LL_HEADER_LENGTH); + if ((ch->protocol == CTCM_PROTO_S390) && + (header->type != ETH_P_IP)) { + + if (!(ch->logflags & LOG_FLAG_ILLEGALPKT)) { + /* + * Check packet type only if we stick strictly + * to S/390's protocol of OS390. This only + * supports IP. Otherwise allow any packet + * type. 
+ */ + ctcm_pr_warn("%s Illegal packet type 0x%04x " + "received, dropping\n", + dev->name, header->type); + ch->logflags |= LOG_FLAG_ILLEGALPKT; + } + + priv->stats.rx_dropped++; + priv->stats.rx_frame_errors++; + return; + } + pskb->protocol = ntohs(header->type); + if (header->length <= LL_HEADER_LENGTH) { + if (!(ch->logflags & LOG_FLAG_ILLEGALSIZE)) { + ctcm_pr_warn( + "%s Illegal packet size %d " + "received (MTU=%d blocklen=%d), " + "dropping\n", dev->name, header->length, + dev->mtu, len); + ch->logflags |= LOG_FLAG_ILLEGALSIZE; + } + + priv->stats.rx_dropped++; + priv->stats.rx_length_errors++; + return; + } + header->length -= LL_HEADER_LENGTH; + len -= LL_HEADER_LENGTH; + if ((header->length > skb_tailroom(pskb)) || + (header->length > len)) { + if (!(ch->logflags & LOG_FLAG_OVERRUN)) { + ctcm_pr_warn( + "%s Illegal packet size %d (beyond the" + " end of received data), dropping\n", + dev->name, header->length); + ch->logflags |= LOG_FLAG_OVERRUN; + } + + priv->stats.rx_dropped++; + priv->stats.rx_length_errors++; + return; + } + skb_put(pskb, header->length); + skb_reset_mac_header(pskb); + len -= header->length; + skb = dev_alloc_skb(pskb->len); + if (!skb) { + if (!(ch->logflags & LOG_FLAG_NOMEM)) { + ctcm_pr_warn( + "%s Out of memory in ctcm_unpack_skb\n", + dev->name); + ch->logflags |= LOG_FLAG_NOMEM; + } + priv->stats.rx_dropped++; + return; + } + skb_copy_from_linear_data(pskb, skb_put(skb, pskb->len), + pskb->len); + skb_reset_mac_header(skb); + skb->dev = pskb->dev; + skb->protocol = pskb->protocol; + pskb->ip_summed = CHECKSUM_UNNECESSARY; + skblen = skb->len; + /* + * reset logflags + */ + ch->logflags = 0; + priv->stats.rx_packets++; + priv->stats.rx_bytes += skblen; + netif_rx_ni(skb); + dev->last_rx = jiffies; + if (len > 0) { + skb_pull(pskb, header->length); + if (skb_tailroom(pskb) < LL_HEADER_LENGTH) { + if (!(ch->logflags & LOG_FLAG_OVERRUN)) { + CTCM_DBF_DEV_NAME(TRACE, dev, + "Overrun in ctcm_unpack_skb"); + ch->logflags |= LOG_FLAG_OVERRUN; + } + return; + } + skb_put(pskb, LL_HEADER_LENGTH); + } + } +} + +/** + * Release a specific channel in the channel list. + * + * ch Pointer to channel struct to be released. + */ +static void channel_free(struct channel *ch) +{ + CTCM_DBF_TEXT(TRACE, 2, __FUNCTION__); + ch->flags &= ~CHANNEL_FLAGS_INUSE; + fsm_newstate(ch->fsm, CTC_STATE_IDLE); +} + +/** + * Remove a specific channel in the channel list. + * + * ch Pointer to channel struct to be released. + */ +static void channel_remove(struct channel *ch) +{ + struct channel **c = &channels; + char chid[CTCM_ID_SIZE+1]; + int ok = 0; + + if (ch == NULL) + return; + else + strncpy(chid, ch->id, CTCM_ID_SIZE); + + channel_free(ch); + while (*c) { + if (*c == ch) { + *c = ch->next; + fsm_deltimer(&ch->timer); + if (IS_MPC(ch)) + fsm_deltimer(&ch->sweep_timer); + + kfree_fsm(ch->fsm); + clear_normalized_cda(&ch->ccw[4]); + if (ch->trans_skb != NULL) { + clear_normalized_cda(&ch->ccw[1]); + dev_kfree_skb_any(ch->trans_skb); + } + if (IS_MPC(ch)) { + tasklet_kill(&ch->ch_tasklet); + tasklet_kill(&ch->ch_disc_tasklet); + kfree(ch->discontact_th); + } + kfree(ch->ccw); + kfree(ch->irb); + kfree(ch); + ok = 1; + break; + } + c = &((*c)->next); + } + + CTCM_DBF_TEXT_(SETUP, CTC_DBF_INFO, "%s(%s) %s", CTCM_FUNTAIL, + chid, ok ? "OK" : "failed"); +} + +/** + * Get a specific channel from the channel list. + * + * type Type of channel we are interested in. + * id Id of channel we are interested in. + * direction Direction we want to use this channel for. 
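The unpacking loop in ctcm_unpack_skb above walks a received block: an overall block length followed by per-packet link-level headers, each validated against the remaining data before the payload is handed upstream. A simplified, self-contained parser of that idea (field sizes and framing reduced for illustration, not the driver's exact wire format):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Simplified link-level framing: a 16-bit total length, then repeated
 * (16-bit packet length incl. header, 16-bit type) headers with payload. */
struct ll_hdr { uint16_t length; uint16_t type; };

static void unpack_block(const uint8_t *buf, size_t buflen)
{
    uint16_t total;
    size_t pos = 2;

    if (buflen < 2)
        return;
    memcpy(&total, buf, 2);
    if (total > buflen)
        total = (uint16_t)buflen;          /* never trust the peer's length */

    while (pos + sizeof(struct ll_hdr) <= total) {
        struct ll_hdr h;

        memcpy(&h, buf + pos, sizeof(h));
        if (h.length < sizeof(h) || pos + h.length > total) {
            printf("bad packet length %u, dropping rest of block\n", h.length);
            return;
        }
        printf("packet type 0x%04x, %u payload bytes\n",
               h.type, (unsigned)(h.length - sizeof(h)));
        pos += h.length;                   /* step to the next packet header */
    }
}

int main(void)
{
    /* one block holding a single 5-byte packet of type 0x0800 (IPv4) */
    uint8_t block[2 + sizeof(struct ll_hdr) + 5];
    uint16_t total = sizeof(block), plen = sizeof(struct ll_hdr) + 5;
    struct ll_hdr h = { plen, 0x0800 };

    memcpy(block, &total, 2);
    memcpy(block + 2, &h, sizeof(h));
    memcpy(block + 2 + sizeof(h), "hello", 5);
    unpack_block(block, sizeof(block));
    return 0;
}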
+ * + * returns Pointer to a channel or NULL if no matching channel available. + */ +static struct channel *channel_get(enum channel_types type, + char *id, int direction) +{ + struct channel *ch = channels; + + if (do_debug) { + char buf[64]; + sprintf(buf, "%s(%d, %s, %d)\n", + CTCM_FUNTAIL, type, id, direction); + CTCM_DBF_TEXT(TRACE, CTC_DBF_INFO, buf); + } + while (ch && (strncmp(ch->id, id, CTCM_ID_SIZE) || (ch->type != type))) + ch = ch->next; + if (!ch) { + char buf[64]; + sprintf(buf, "%s(%d, %s, %d) not found in channel list\n", + CTCM_FUNTAIL, type, id, direction); + CTCM_DBF_TEXT(ERROR, CTC_DBF_ERROR, buf); + } else { + if (ch->flags & CHANNEL_FLAGS_INUSE) + ch = NULL; + else { + ch->flags |= CHANNEL_FLAGS_INUSE; + ch->flags &= ~CHANNEL_FLAGS_RWMASK; + ch->flags |= (direction == WRITE) + ? CHANNEL_FLAGS_WRITE : CHANNEL_FLAGS_READ; + fsm_newstate(ch->fsm, CTC_STATE_STOPPED); + } + } + return ch; +} + +static long ctcm_check_irb_error(struct ccw_device *cdev, struct irb *irb) +{ + if (!IS_ERR(irb)) + return 0; + + CTCM_DBF_TEXT_(ERROR, CTC_DBF_WARN, "irb error %ld on device %s\n", + PTR_ERR(irb), cdev->dev.bus_id); + + switch (PTR_ERR(irb)) { + case -EIO: + ctcm_pr_warn("i/o-error on device %s\n", cdev->dev.bus_id); + break; + case -ETIMEDOUT: + ctcm_pr_warn("timeout on device %s\n", cdev->dev.bus_id); + break; + default: + ctcm_pr_warn("unknown error %ld on device %s\n", + PTR_ERR(irb), cdev->dev.bus_id); + } + return PTR_ERR(irb); +} + + +/** + * Check sense of a unit check. + * + * ch The channel, the sense code belongs to. + * sense The sense code to inspect. + */ +static inline void ccw_unit_check(struct channel *ch, unsigned char sense) +{ + CTCM_DBF_TEXT(TRACE, 5, __FUNCTION__); + if (sense & SNS0_INTERVENTION_REQ) { + if (sense & 0x01) { + ctcm_pr_debug("%s: Interface disc. or Sel. reset " + "(remote)\n", ch->id); + fsm_event(ch->fsm, CTC_EVENT_UC_RCRESET, ch); + } else { + ctcm_pr_debug("%s: System reset (remote)\n", ch->id); + fsm_event(ch->fsm, CTC_EVENT_UC_RSRESET, ch); + } + } else if (sense & SNS0_EQUIPMENT_CHECK) { + if (sense & SNS0_BUS_OUT_CHECK) { + ctcm_pr_warn("%s: Hardware malfunction (remote)\n", + ch->id); + fsm_event(ch->fsm, CTC_EVENT_UC_HWFAIL, ch); + } else { + ctcm_pr_warn("%s: Read-data parity error (remote)\n", + ch->id); + fsm_event(ch->fsm, CTC_EVENT_UC_RXPARITY, ch); + } + } else if (sense & SNS0_BUS_OUT_CHECK) { + if (sense & 0x04) { + ctcm_pr_warn("%s: Data-streaming timeout)\n", ch->id); + fsm_event(ch->fsm, CTC_EVENT_UC_TXTIMEOUT, ch); + } else { + ctcm_pr_warn("%s: Data-transfer parity error\n", + ch->id); + fsm_event(ch->fsm, CTC_EVENT_UC_TXPARITY, ch); + } + } else if (sense & SNS0_CMD_REJECT) { + ctcm_pr_warn("%s: Command reject\n", ch->id); + } else if (sense == 0) { + ctcm_pr_debug("%s: Unit check ZERO\n", ch->id); + fsm_event(ch->fsm, CTC_EVENT_UC_ZERO, ch); + } else { + ctcm_pr_warn("%s: Unit Check with sense code: %02x\n", + ch->id, sense); + fsm_event(ch->fsm, CTC_EVENT_UC_UNKNOWN, ch); + } +} + +int ctcm_ch_alloc_buffer(struct channel *ch) +{ + CTCM_DBF_TEXT(TRACE, 5, __FUNCTION__); + + clear_normalized_cda(&ch->ccw[1]); + ch->trans_skb = __dev_alloc_skb(ch->max_bufsize, GFP_ATOMIC | GFP_DMA); + if (ch->trans_skb == NULL) { + ctcm_pr_warn("%s: Couldn't alloc %s trans_skb\n", + ch->id, + (CHANNEL_DIRECTION(ch->flags) == READ) ? 
"RX" : "TX"); + return -ENOMEM; + } + + ch->ccw[1].count = ch->max_bufsize; + if (set_normalized_cda(&ch->ccw[1], ch->trans_skb->data)) { + dev_kfree_skb(ch->trans_skb); + ch->trans_skb = NULL; + ctcm_pr_warn("%s: set_normalized_cda for %s " + "trans_skb failed, dropping packets\n", + ch->id, + (CHANNEL_DIRECTION(ch->flags) == READ) ? "RX" : "TX"); + return -ENOMEM; + } + + ch->ccw[1].count = 0; + ch->trans_skb_data = ch->trans_skb->data; + ch->flags &= ~CHANNEL_FLAGS_BUFSIZE_CHANGED; + return 0; +} + +/* + * Interface API for upper network layers + */ + +/** + * Open an interface. + * Called from generic network layer when ifconfig up is run. + * + * dev Pointer to interface struct. + * + * returns 0 on success, -ERRNO on failure. (Never fails.) + */ +int ctcm_open(struct net_device *dev) +{ + struct ctcm_priv *priv = dev->priv; + + CTCMY_DBF_DEV_NAME(SETUP, dev, ""); + if (!IS_MPC(priv)) + fsm_event(priv->fsm, DEV_EVENT_START, dev); + return 0; +} + +/** + * Close an interface. + * Called from generic network layer when ifconfig down is run. + * + * dev Pointer to interface struct. + * + * returns 0 on success, -ERRNO on failure. (Never fails.) + */ +int ctcm_close(struct net_device *dev) +{ + struct ctcm_priv *priv = dev->priv; + + CTCMY_DBF_DEV_NAME(SETUP, dev, ""); + if (!IS_MPC(priv)) + fsm_event(priv->fsm, DEV_EVENT_STOP, dev); + return 0; +} + + +/** + * Transmit a packet. + * This is a helper function for ctcm_tx(). + * + * ch Channel to be used for sending. + * skb Pointer to struct sk_buff of packet to send. + * The linklevel header has already been set up + * by ctcm_tx(). + * + * returns 0 on success, -ERRNO on failure. (Never fails.) + */ +static int ctcm_transmit_skb(struct channel *ch, struct sk_buff *skb) +{ + unsigned long saveflags; + struct ll_header header; + int rc = 0; + __u16 block_len; + int ccw_idx; + struct sk_buff *nskb; + unsigned long hi; + + /* we need to acquire the lock for testing the state + * otherwise we can have an IRQ changing the state to + * TXIDLE after the test but before acquiring the lock. + */ + spin_lock_irqsave(&ch->collect_lock, saveflags); + if (fsm_getstate(ch->fsm) != CTC_STATE_TXIDLE) { + int l = skb->len + LL_HEADER_LENGTH; + + if (ch->collect_len + l > ch->max_bufsize - 2) { + spin_unlock_irqrestore(&ch->collect_lock, saveflags); + return -EBUSY; + } else { + atomic_inc(&skb->users); + header.length = l; + header.type = skb->protocol; + header.unused = 0; + memcpy(skb_push(skb, LL_HEADER_LENGTH), &header, + LL_HEADER_LENGTH); + skb_queue_tail(&ch->collect_queue, skb); + ch->collect_len += l; + } + spin_unlock_irqrestore(&ch->collect_lock, saveflags); + goto done; + } + spin_unlock_irqrestore(&ch->collect_lock, saveflags); + /* + * Protect skb against beeing free'd by upper + * layers. + */ + atomic_inc(&skb->users); + ch->prof.txlen += skb->len; + header.length = skb->len + LL_HEADER_LENGTH; + header.type = skb->protocol; + header.unused = 0; + memcpy(skb_push(skb, LL_HEADER_LENGTH), &header, LL_HEADER_LENGTH); + block_len = skb->len + 2; + *((__u16 *)skb_push(skb, 2)) = block_len; + + /* + * IDAL support in CTCM is broken, so we have to + * care about skb's above 2G ourselves. 
+ */ + hi = ((unsigned long)skb_tail_pointer(skb) + LL_HEADER_LENGTH) >> 31; + if (hi) { + nskb = alloc_skb(skb->len, GFP_ATOMIC | GFP_DMA); + if (!nskb) { + atomic_dec(&skb->users); + skb_pull(skb, LL_HEADER_LENGTH + 2); + ctcm_clear_busy(ch->netdev); + return -ENOMEM; + } else { + memcpy(skb_put(nskb, skb->len), skb->data, skb->len); + atomic_inc(&nskb->users); + atomic_dec(&skb->users); + dev_kfree_skb_irq(skb); + skb = nskb; + } + } + + ch->ccw[4].count = block_len; + if (set_normalized_cda(&ch->ccw[4], skb->data)) { + /* + * idal allocation failed, try via copying to + * trans_skb. trans_skb usually has a pre-allocated + * idal. + */ + if (ctcm_checkalloc_buffer(ch)) { + /* + * Remove our header. It gets added + * again on retransmit. + */ + atomic_dec(&skb->users); + skb_pull(skb, LL_HEADER_LENGTH + 2); + ctcm_clear_busy(ch->netdev); + return -EBUSY; + } + + skb_reset_tail_pointer(ch->trans_skb); + ch->trans_skb->len = 0; + ch->ccw[1].count = skb->len; + skb_copy_from_linear_data(skb, + skb_put(ch->trans_skb, skb->len), skb->len); + atomic_dec(&skb->users); + dev_kfree_skb_irq(skb); + ccw_idx = 0; + } else { + skb_queue_tail(&ch->io_queue, skb); + ccw_idx = 3; + } + ch->retry = 0; + fsm_newstate(ch->fsm, CTC_STATE_TX); + fsm_addtimer(&ch->timer, CTCM_TIME_5_SEC, CTC_EVENT_TIMER, ch); + spin_lock_irqsave(get_ccwdev_lock(ch->cdev), saveflags); + ch->prof.send_stamp = current_kernel_time(); /* xtime */ + rc = ccw_device_start(ch->cdev, &ch->ccw[ccw_idx], + (unsigned long)ch, 0xff, 0); + spin_unlock_irqrestore(get_ccwdev_lock(ch->cdev), saveflags); + if (ccw_idx == 3) + ch->prof.doios_single++; + if (rc != 0) { + fsm_deltimer(&ch->timer); + ctcm_ccw_check_rc(ch, rc, "single skb TX"); + if (ccw_idx == 3) + skb_dequeue_tail(&ch->io_queue); + /* + * Remove our header. It gets added + * again on retransmit. 
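Because IDAL handling is bypassed, the transmit path above checks whether the buffer ends above the 2 GB line (the shift by 31) and bounce-copies it into memory the channel can address when it does. A user-space illustration of that check, with malloc only standing in for a GFP_DMA allocation:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Return a buffer that lies entirely below 2 GB; copy (bounce) if needed.
 * In user space the "low" allocation is faked - the point is the check. */
static void *dma_safe_copy(void *buf, size_t len)
{
    uintptr_t end = (uintptr_t)buf + len;

    if ((end >> 31) == 0)
        return buf;                    /* already below 2 GB, use it as is */

    /* above the line: allocate a bounce buffer and copy the payload over */
    void *bounce = malloc(len);        /* stand-in for a GFP_DMA allocation */
    if (!bounce)
        return NULL;                   /* caller must drop the packet */
    memcpy(bounce, buf, len);
    return bounce;
}

int main(void)
{
    char payload[] = "frame data";
    void *safe = dma_safe_copy(payload, sizeof(payload));

    printf("%s\n", safe == payload ? "used original buffer"
                                   : "bounced into a fresh buffer");
    if (safe && safe != payload)
        free(safe);
    return 0;
}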
+ */ + skb_pull(skb, LL_HEADER_LENGTH + 2); + } else if (ccw_idx == 0) { + struct net_device *dev = ch->netdev; + struct ctcm_priv *priv = dev->priv; + priv->stats.tx_packets++; + priv->stats.tx_bytes += skb->len - LL_HEADER_LENGTH; + } +done: + ctcm_clear_busy(ch->netdev); + return rc; +} + +static void ctcmpc_send_sweep_req(struct channel *rch) +{ + struct net_device *dev = rch->netdev; + struct ctcm_priv *priv; + struct mpc_group *grp; + struct th_sweep *header; + struct sk_buff *sweep_skb; + struct channel *ch; + int rc = 0; + + priv = dev->priv; + grp = priv->mpcg; + ch = priv->channel[WRITE]; + + if (do_debug) + MPC_DBF_DEV_NAME(TRACE, dev, ch->id); + + /* sweep processing is not complete until response and request */ + /* has completed for all read channels in group */ + if (grp->in_sweep == 0) { + grp->in_sweep = 1; + grp->sweep_rsp_pend_num = grp->active_channels[READ]; + grp->sweep_req_pend_num = grp->active_channels[READ]; + } + + sweep_skb = __dev_alloc_skb(MPC_BUFSIZE_DEFAULT, GFP_ATOMIC|GFP_DMA); + + if (sweep_skb == NULL) { + printk(KERN_INFO "Couldn't alloc sweep_skb\n"); + rc = -ENOMEM; + goto done; + } + + header = kmalloc(TH_SWEEP_LENGTH, gfp_type()); + + if (!header) { + dev_kfree_skb_any(sweep_skb); + rc = -ENOMEM; + goto done; + } + + header->th.th_seg = 0x00 ; + header->th.th_ch_flag = TH_SWEEP_REQ; /* 0x0f */ + header->th.th_blk_flag = 0x00; + header->th.th_is_xid = 0x00; + header->th.th_seq_num = 0x00; + header->sw.th_last_seq = ch->th_seq_num; + + memcpy(skb_put(sweep_skb, TH_SWEEP_LENGTH), header, TH_SWEEP_LENGTH); + + kfree(header); + + dev->trans_start = jiffies; + skb_queue_tail(&ch->sweep_queue, sweep_skb); + + fsm_addtimer(&ch->sweep_timer, 100, CTC_EVENT_RSWEEP_TIMER, ch); + + return; + +done: + if (rc != 0) { + grp->in_sweep = 0; + ctcm_clear_busy(dev); + fsm_event(grp->fsm, MPCG_EVENT_INOP, dev); + } + + return; +} + +/* + * MPC mode version of transmit_skb + */ +static int ctcmpc_transmit_skb(struct channel *ch, struct sk_buff *skb) +{ + struct pdu *p_header; + struct net_device *dev = ch->netdev; + struct ctcm_priv *priv = dev->priv; + struct mpc_group *grp = priv->mpcg; + struct th_header *header; + struct sk_buff *nskb; + int rc = 0; + int ccw_idx; + unsigned long hi; + unsigned long saveflags = 0; /* avoids compiler warning */ + __u16 block_len; + + if (do_debug) + ctcm_pr_debug( + "ctcm enter: %s(): %s cp=%i ch=0x%p id=%s state=%s\n", + __FUNCTION__, dev->name, smp_processor_id(), ch, + ch->id, fsm_getstate_str(ch->fsm)); + + if ((fsm_getstate(ch->fsm) != CTC_STATE_TXIDLE) || grp->in_sweep) { + spin_lock_irqsave(&ch->collect_lock, saveflags); + atomic_inc(&skb->users); + p_header = kmalloc(PDU_HEADER_LENGTH, gfp_type()); + + if (!p_header) { + printk(KERN_WARNING "ctcm: OUT OF MEMORY IN %s():" + " Data Lost \n", __FUNCTION__); + + atomic_dec(&skb->users); + dev_kfree_skb_any(skb); + spin_unlock_irqrestore(&ch->collect_lock, saveflags); + fsm_event(priv->mpcg->fsm, MPCG_EVENT_INOP, dev); + goto done; + } + + p_header->pdu_offset = skb->len; + p_header->pdu_proto = 0x01; + p_header->pdu_flag = 0x00; + if (skb->protocol == ntohs(ETH_P_SNAP)) { + p_header->pdu_flag |= PDU_FIRST | PDU_CNTL; + } else { + p_header->pdu_flag |= PDU_FIRST; + } + p_header->pdu_seq = 0; + memcpy(skb_push(skb, PDU_HEADER_LENGTH), p_header, + PDU_HEADER_LENGTH); + + if (do_debug_data) { + ctcm_pr_debug("ctcm: %s() Putting on collect_q" + " - skb len: %04x \n", __FUNCTION__, skb->len); + ctcm_pr_debug("ctcm: %s() pdu header and data" + " for up to 32 bytes\n", __FUNCTION__); + 
ctcmpc_dump32((char *)skb->data, skb->len); + } + + skb_queue_tail(&ch->collect_queue, skb); + ch->collect_len += skb->len; + kfree(p_header); + + spin_unlock_irqrestore(&ch->collect_lock, saveflags); + goto done; + } + + /* + * Protect skb against beeing free'd by upper + * layers. + */ + atomic_inc(&skb->users); + + block_len = skb->len + TH_HEADER_LENGTH + PDU_HEADER_LENGTH; + /* + * IDAL support in CTCM is broken, so we have to + * care about skb's above 2G ourselves. + */ + hi = ((unsigned long)skb->tail + TH_HEADER_LENGTH) >> 31; + if (hi) { + nskb = __dev_alloc_skb(skb->len, GFP_ATOMIC | GFP_DMA); + if (!nskb) { + printk(KERN_WARNING "ctcm: %s() OUT OF MEMORY" + "- Data Lost \n", __FUNCTION__); + atomic_dec(&skb->users); + dev_kfree_skb_any(skb); + fsm_event(priv->mpcg->fsm, MPCG_EVENT_INOP, dev); + goto done; + } else { + memcpy(skb_put(nskb, skb->len), skb->data, skb->len); + atomic_inc(&nskb->users); + atomic_dec(&skb->users); + dev_kfree_skb_irq(skb); + skb = nskb; + } + } + + p_header = kmalloc(PDU_HEADER_LENGTH, gfp_type()); + + if (!p_header) { + printk(KERN_WARNING "ctcm: %s() OUT OF MEMORY" + ": Data Lost \n", __FUNCTION__); + + atomic_dec(&skb->users); + dev_kfree_skb_any(skb); + fsm_event(priv->mpcg->fsm, MPCG_EVENT_INOP, dev); + goto done; + } + + p_header->pdu_offset = skb->len; + p_header->pdu_proto = 0x01; + p_header->pdu_flag = 0x00; + p_header->pdu_seq = 0; + if (skb->protocol == ntohs(ETH_P_SNAP)) { + p_header->pdu_flag |= PDU_FIRST | PDU_CNTL; + } else { + p_header->pdu_flag |= PDU_FIRST; + } + memcpy(skb_push(skb, PDU_HEADER_LENGTH), p_header, PDU_HEADER_LENGTH); + + kfree(p_header); + + if (ch->collect_len > 0) { + spin_lock_irqsave(&ch->collect_lock, saveflags); + skb_queue_tail(&ch->collect_queue, skb); + ch->collect_len += skb->len; + skb = skb_dequeue(&ch->collect_queue); + ch->collect_len -= skb->len; + spin_unlock_irqrestore(&ch->collect_lock, saveflags); + } + + p_header = (struct pdu *)skb->data; + p_header->pdu_flag |= PDU_LAST; + + ch->prof.txlen += skb->len - PDU_HEADER_LENGTH; + + header = kmalloc(TH_HEADER_LENGTH, gfp_type()); + + if (!header) { + printk(KERN_WARNING "ctcm: %s() OUT OF MEMORY: Data Lost \n", + __FUNCTION__); + atomic_dec(&skb->users); + dev_kfree_skb_any(skb); + fsm_event(priv->mpcg->fsm, MPCG_EVENT_INOP, dev); + goto done; + } + + header->th_seg = 0x00; + header->th_ch_flag = TH_HAS_PDU; /* Normal data */ + header->th_blk_flag = 0x00; + header->th_is_xid = 0x00; /* Just data here */ + ch->th_seq_num++; + header->th_seq_num = ch->th_seq_num; + + if (do_debug_data) + ctcm_pr_debug("ctcm: %s() ToVTAM_th_seq= %08x\n" , + __FUNCTION__, ch->th_seq_num); + + /* put the TH on the packet */ + memcpy(skb_push(skb, TH_HEADER_LENGTH), header, TH_HEADER_LENGTH); + + kfree(header); + + if (do_debug_data) { + ctcm_pr_debug("ctcm: %s(): skb len: %04x \n", + __FUNCTION__, skb->len); + ctcm_pr_debug("ctcm: %s(): pdu header and data for up to 32 " + "bytes sent to vtam\n", __FUNCTION__); + ctcmpc_dump32((char *)skb->data, skb->len); + } + + ch->ccw[4].count = skb->len; + if (set_normalized_cda(&ch->ccw[4], skb->data)) { + /* + * idal allocation failed, try via copying to + * trans_skb. trans_skb usually has a pre-allocated + * idal. + */ + if (ctcm_checkalloc_buffer(ch)) { + /* + * Remove our header. It gets added + * again on retransmit. 
+ */ + atomic_dec(&skb->users); + dev_kfree_skb_any(skb); + printk(KERN_WARNING "ctcm: %s()OUT OF MEMORY:" + " Data Lost \n", __FUNCTION__); + fsm_event(priv->mpcg->fsm, MPCG_EVENT_INOP, dev); + goto done; + } + + skb_reset_tail_pointer(ch->trans_skb); + ch->trans_skb->len = 0; + ch->ccw[1].count = skb->len; + memcpy(skb_put(ch->trans_skb, skb->len), skb->data, skb->len); + atomic_dec(&skb->users); + dev_kfree_skb_irq(skb); + ccw_idx = 0; + if (do_debug_data) { + ctcm_pr_debug("ctcm: %s() TRANS skb len: %d \n", + __FUNCTION__, ch->trans_skb->len); + ctcm_pr_debug("ctcm: %s up to 32 bytes of data" + " sent to vtam\n", __FUNCTION__); + ctcmpc_dump32((char *)ch->trans_skb->data, + ch->trans_skb->len); + } + } else { + skb_queue_tail(&ch->io_queue, skb); + ccw_idx = 3; + } + ch->retry = 0; + fsm_newstate(ch->fsm, CTC_STATE_TX); + fsm_addtimer(&ch->timer, CTCM_TIME_5_SEC, CTC_EVENT_TIMER, ch); + + if (do_debug_ccw) + ctcmpc_dumpit((char *)&ch->ccw[ccw_idx], + sizeof(struct ccw1) * 3); + + spin_lock_irqsave(get_ccwdev_lock(ch->cdev), saveflags); + ch->prof.send_stamp = current_kernel_time(); /* xtime */ + rc = ccw_device_start(ch->cdev, &ch->ccw[ccw_idx], + (unsigned long)ch, 0xff, 0); + spin_unlock_irqrestore(get_ccwdev_lock(ch->cdev), saveflags); + if (ccw_idx == 3) + ch->prof.doios_single++; + if (rc != 0) { + fsm_deltimer(&ch->timer); + ctcm_ccw_check_rc(ch, rc, "single skb TX"); + if (ccw_idx == 3) + skb_dequeue_tail(&ch->io_queue); + } else if (ccw_idx == 0) { + priv->stats.tx_packets++; + priv->stats.tx_bytes += skb->len - TH_HEADER_LENGTH; + } + if (ch->th_seq_num > 0xf0000000) /* Chose 4Billion at random. */ + ctcmpc_send_sweep_req(ch); + +done: + if (do_debug) + ctcm_pr_debug("ctcm exit: %s %s()\n", dev->name, __FUNCTION__); + return 0; +} + +/** + * Start transmission of a packet. + * Called from generic network device layer. + * + * skb Pointer to buffer containing the packet. + * dev Pointer to interface struct. + * + * returns 0 if packet consumed, !0 if packet rejected. + * Note: If we return !0, then the packet is free'd by + * the generic network layer. + */ +/* first merge version - leaving both functions separated */ +static int ctcm_tx(struct sk_buff *skb, struct net_device *dev) +{ + int rc = 0; + struct ctcm_priv *priv; + + CTCM_DBF_TEXT(TRACE, 5, __FUNCTION__); + priv = dev->priv; + + if (skb == NULL) { + ctcm_pr_warn("%s: NULL sk_buff passed\n", dev->name); + priv->stats.tx_dropped++; + return 0; + } + if (skb_headroom(skb) < (LL_HEADER_LENGTH + 2)) { + ctcm_pr_warn("%s: Got sk_buff with head room < %ld bytes\n", + dev->name, LL_HEADER_LENGTH + 2); + dev_kfree_skb(skb); + priv->stats.tx_dropped++; + return 0; + } + + /* + * If channels are not running, try to restart them + * and throw away packet. 
+ */ + if (fsm_getstate(priv->fsm) != DEV_STATE_RUNNING) { + fsm_event(priv->fsm, DEV_EVENT_START, dev); + dev_kfree_skb(skb); + priv->stats.tx_dropped++; + priv->stats.tx_errors++; + priv->stats.tx_carrier_errors++; + return 0; + } + + if (ctcm_test_and_set_busy(dev)) + return -EBUSY; + + dev->trans_start = jiffies; + if (ctcm_transmit_skb(priv->channel[WRITE], skb) != 0) + rc = 1; + return rc; +} + +/* unmerged MPC variant of ctcm_tx */ +static int ctcmpc_tx(struct sk_buff *skb, struct net_device *dev) +{ + int len = 0; + struct ctcm_priv *priv = NULL; + struct mpc_group *grp = NULL; + struct sk_buff *newskb = NULL; + + if (do_debug) + ctcm_pr_debug("ctcmpc enter: %s(): skb:%0lx\n", + __FUNCTION__, (unsigned long)skb); + + CTCM_DBF_TEXT_(MPC_TRACE, CTC_DBF_DEBUG, + "ctcmpc enter: %s(): skb:%0lx\n", + __FUNCTION__, (unsigned long)skb); + + priv = dev->priv; + grp = priv->mpcg; + /* + * Some sanity checks ... + */ + if (skb == NULL) { + ctcm_pr_warn("ctcmpc: %s: NULL sk_buff passed\n", dev->name); + priv->stats.tx_dropped++; + goto done; + } + if (skb_headroom(skb) < (TH_HEADER_LENGTH + PDU_HEADER_LENGTH)) { + CTCM_DBF_TEXT_(MPC_TRACE, CTC_DBF_WARN, + "%s: Got sk_buff with head room < %ld bytes\n", + dev->name, TH_HEADER_LENGTH + PDU_HEADER_LENGTH); + + if (do_debug_data) + ctcmpc_dump32((char *)skb->data, skb->len); + + len = skb->len + TH_HEADER_LENGTH + PDU_HEADER_LENGTH; + newskb = __dev_alloc_skb(len, gfp_type() | GFP_DMA); + + if (!newskb) { + printk(KERN_WARNING "ctcmpc: %s() OUT OF MEMORY-" + "Data Lost\n", + __FUNCTION__); + + dev_kfree_skb_any(skb); + priv->stats.tx_dropped++; + priv->stats.tx_errors++; + priv->stats.tx_carrier_errors++; + fsm_event(grp->fsm, MPCG_EVENT_INOP, dev); + goto done; + } + newskb->protocol = skb->protocol; + skb_reserve(newskb, TH_HEADER_LENGTH + PDU_HEADER_LENGTH); + memcpy(skb_put(newskb, skb->len), skb->data, skb->len); + dev_kfree_skb_any(skb); + skb = newskb; + } + + /* + * If channels are not running, + * notify anybody about a link failure and throw + * away packet. + */ + if ((fsm_getstate(priv->fsm) != DEV_STATE_RUNNING) || + (fsm_getstate(grp->fsm) < MPCG_STATE_XID2INITW)) { + dev_kfree_skb_any(skb); + printk(KERN_INFO "ctcmpc: %s() DATA RCVD - MPC GROUP " + "NOT ACTIVE - DROPPED\n", + __FUNCTION__); + priv->stats.tx_dropped++; + priv->stats.tx_errors++; + priv->stats.tx_carrier_errors++; + goto done; + } + + if (ctcm_test_and_set_busy(dev)) { + printk(KERN_WARNING "%s:DEVICE ERR - UNRECOVERABLE DATA LOSS\n", + __FUNCTION__); + dev_kfree_skb_any(skb); + priv->stats.tx_dropped++; + priv->stats.tx_errors++; + priv->stats.tx_carrier_errors++; + fsm_event(grp->fsm, MPCG_EVENT_INOP, dev); + goto done; + } + + dev->trans_start = jiffies; + if (ctcmpc_transmit_skb(priv->channel[WRITE], skb) != 0) { + printk(KERN_WARNING "ctcmpc: %s() DEVICE ERROR" + ": Data Lost \n", + __FUNCTION__); + printk(KERN_WARNING "ctcmpc: %s() DEVICE ERROR" + " - UNRECOVERABLE DATA LOSS\n", + __FUNCTION__); + dev_kfree_skb_any(skb); + priv->stats.tx_dropped++; + priv->stats.tx_errors++; + priv->stats.tx_carrier_errors++; + ctcm_clear_busy(dev); + fsm_event(grp->fsm, MPCG_EVENT_INOP, dev); + goto done; + } + ctcm_clear_busy(dev); +done: + if (do_debug) + MPC_DBF_DEV_NAME(TRACE, dev, "exit"); + + return 0; /* handle freeing of skb here */ +} + + +/** + * Sets MTU of an interface. + * + * dev Pointer to interface struct. + * new_mtu The new MTU to use for this interface. + * + * returns 0 on success, -EINVAL if MTU is out of valid range. + * (valid range is 576 .. 65527). 
If VM is on the + * remote side, maximum MTU is 32760, however this is + * not checked here. + */ +static int ctcm_change_mtu(struct net_device *dev, int new_mtu) +{ + struct ctcm_priv *priv; + int max_bufsize; + + CTCM_DBF_TEXT(SETUP, CTC_DBF_INFO, __FUNCTION__); + + if (new_mtu < 576 || new_mtu > 65527) + return -EINVAL; + + priv = dev->priv; + max_bufsize = priv->channel[READ]->max_bufsize; + + if (IS_MPC(priv)) { + if (new_mtu > max_bufsize - TH_HEADER_LENGTH) + return -EINVAL; + dev->hard_header_len = TH_HEADER_LENGTH + PDU_HEADER_LENGTH; + } else { + if (new_mtu > max_bufsize - LL_HEADER_LENGTH - 2) + return -EINVAL; + dev->hard_header_len = LL_HEADER_LENGTH + 2; + } + dev->mtu = new_mtu; + return 0; +} + +/** + * Returns interface statistics of a device. + * + * dev Pointer to interface struct. + * + * returns Pointer to stats struct of this interface. + */ +static struct net_device_stats *ctcm_stats(struct net_device *dev) +{ + return &((struct ctcm_priv *)dev->priv)->stats; +} + + +static void ctcm_netdev_unregister(struct net_device *dev) +{ + CTCM_DBF_TEXT(SETUP, CTC_DBF_INFO, __FUNCTION__); + if (!dev) + return; + unregister_netdev(dev); +} + +static int ctcm_netdev_register(struct net_device *dev) +{ + CTCM_DBF_TEXT(SETUP, CTC_DBF_INFO, __FUNCTION__); + return register_netdev(dev); +} + +static void ctcm_free_netdevice(struct net_device *dev) +{ + struct ctcm_priv *priv; + struct mpc_group *grp; + + CTCM_DBF_TEXT(SETUP, CTC_DBF_INFO, __FUNCTION__); + + if (!dev) + return; + priv = dev->priv; + if (priv) { + grp = priv->mpcg; + if (grp) { + if (grp->fsm) + kfree_fsm(grp->fsm); + if (grp->xid_skb) + dev_kfree_skb(grp->xid_skb); + if (grp->rcvd_xid_skb) + dev_kfree_skb(grp->rcvd_xid_skb); + tasklet_kill(&grp->mpc_tasklet2); + kfree(grp); + priv->mpcg = NULL; + } + if (priv->fsm) { + kfree_fsm(priv->fsm); + priv->fsm = NULL; + } + kfree(priv->xid); + priv->xid = NULL; + /* + * Note: kfree(priv); is done in "opposite" function of + * allocator function probe_device which is remove_device. + */ + } +#ifdef MODULE + free_netdev(dev); +#endif +} + +struct mpc_group *ctcmpc_init_mpc_group(struct ctcm_priv *priv); + +void static ctcm_dev_setup(struct net_device *dev) +{ + dev->open = ctcm_open; + dev->stop = ctcm_close; + dev->get_stats = ctcm_stats; + dev->change_mtu = ctcm_change_mtu; + dev->type = ARPHRD_SLIP; + dev->tx_queue_len = 100; + dev->flags = IFF_POINTOPOINT | IFF_NOARP; +} + +/* + * Initialize everything of the net device except the name and the + * channel structs. 
+ */ +static struct net_device *ctcm_init_netdevice(struct ctcm_priv *priv) +{ + struct net_device *dev; + struct mpc_group *grp; + if (!priv) + return NULL; + + if (IS_MPC(priv)) + dev = alloc_netdev(0, MPC_DEVICE_GENE, ctcm_dev_setup); + else + dev = alloc_netdev(0, CTC_DEVICE_GENE, ctcm_dev_setup); + + if (!dev) { + ctcm_pr_err("%s: Out of memory\n", __FUNCTION__); + return NULL; + } + dev->priv = priv; + priv->fsm = init_fsm("ctcmdev", dev_state_names, dev_event_names, + CTCM_NR_DEV_STATES, CTCM_NR_DEV_EVENTS, + dev_fsm, dev_fsm_len, GFP_KERNEL); + if (priv->fsm == NULL) { + CTCMY_DBF_DEV(SETUP, dev, "init_fsm error"); + kfree(dev); + return NULL; + } + fsm_newstate(priv->fsm, DEV_STATE_STOPPED); + fsm_settimer(priv->fsm, &priv->restart_timer); + + if (IS_MPC(priv)) { + /* MPC Group Initializations */ + grp = ctcmpc_init_mpc_group(priv); + if (grp == NULL) { + MPC_DBF_DEV(SETUP, dev, "init_mpc_group error"); + kfree(dev); + return NULL; + } + tasklet_init(&grp->mpc_tasklet2, + mpc_group_ready, (unsigned long)dev); + dev->mtu = MPC_BUFSIZE_DEFAULT - + TH_HEADER_LENGTH - PDU_HEADER_LENGTH; + + dev->hard_start_xmit = ctcmpc_tx; + dev->hard_header_len = TH_HEADER_LENGTH + PDU_HEADER_LENGTH; + priv->buffer_size = MPC_BUFSIZE_DEFAULT; + } else { + dev->mtu = CTCM_BUFSIZE_DEFAULT - LL_HEADER_LENGTH - 2; + dev->hard_start_xmit = ctcm_tx; + dev->hard_header_len = LL_HEADER_LENGTH + 2; + } + + CTCMY_DBF_DEV(SETUP, dev, "finished"); + return dev; +} + +/** + * Main IRQ handler. + * + * cdev The ccw_device the interrupt is for. + * intparm interruption parameter. + * irb interruption response block. + */ +static void ctcm_irq_handler(struct ccw_device *cdev, + unsigned long intparm, struct irb *irb) +{ + struct channel *ch; + struct net_device *dev; + struct ctcm_priv *priv; + struct ccwgroup_device *cgdev; + + CTCM_DBF_TEXT(TRACE, CTC_DBF_DEBUG, __FUNCTION__); + if (ctcm_check_irb_error(cdev, irb)) + return; + + cgdev = dev_get_drvdata(&cdev->dev); + + /* Check for unsolicited interrupts. */ + if (cgdev == NULL) { + ctcm_pr_warn("ctcm: Got unsolicited irq: %s c-%02x d-%02x\n", + cdev->dev.bus_id, irb->scsw.cstat, + irb->scsw.dstat); + return; + } + + priv = dev_get_drvdata(&cgdev->dev); + + /* Try to extract channel from driver data. */ + if (priv->channel[READ]->cdev == cdev) + ch = priv->channel[READ]; + else if (priv->channel[WRITE]->cdev == cdev) + ch = priv->channel[WRITE]; + else { + ctcm_pr_err("ctcm: Can't determine channel for interrupt, " + "device %s\n", cdev->dev.bus_id); + return; + } + + dev = (struct net_device *)(ch->netdev); + if (dev == NULL) { + ctcm_pr_crit("ctcm: %s dev=NULL bus_id=%s, ch=0x%p\n", + __FUNCTION__, cdev->dev.bus_id, ch); + return; + } + + if (do_debug) + ctcm_pr_debug("%s: interrupt for device: %s " + "received c-%02x d-%02x\n", + dev->name, + ch->id, + irb->scsw.cstat, + irb->scsw.dstat); + + /* Copy interruption response block. 
*/ + memcpy(ch->irb, irb, sizeof(struct irb)); + + /* Check for good subchannel return code, otherwise error message */ + if (irb->scsw.cstat) { + fsm_event(ch->fsm, CTC_EVENT_SC_UNKNOWN, ch); + ctcm_pr_warn("%s: subchannel check for dev: %s - %02x %02x\n", + dev->name, ch->id, irb->scsw.cstat, + irb->scsw.dstat); + return; + } + + /* Check the reason-code of a unit check */ + if (irb->scsw.dstat & DEV_STAT_UNIT_CHECK) { + ccw_unit_check(ch, irb->ecw[0]); + return; + } + if (irb->scsw.dstat & DEV_STAT_BUSY) { + if (irb->scsw.dstat & DEV_STAT_ATTENTION) + fsm_event(ch->fsm, CTC_EVENT_ATTNBUSY, ch); + else + fsm_event(ch->fsm, CTC_EVENT_BUSY, ch); + return; + } + if (irb->scsw.dstat & DEV_STAT_ATTENTION) { + fsm_event(ch->fsm, CTC_EVENT_ATTN, ch); + return; + } + if ((irb->scsw.stctl & SCSW_STCTL_SEC_STATUS) || + (irb->scsw.stctl == SCSW_STCTL_STATUS_PEND) || + (irb->scsw.stctl == + (SCSW_STCTL_ALERT_STATUS | SCSW_STCTL_STATUS_PEND))) + fsm_event(ch->fsm, CTC_EVENT_FINSTAT, ch); + else + fsm_event(ch->fsm, CTC_EVENT_IRQ, ch); + +} + +/** + * Add ctcm specific attributes. + * Add ctcm private data. + * + * cgdev pointer to ccwgroup_device just added + * + * returns 0 on success, !0 on failure. + */ +static int ctcm_probe_device(struct ccwgroup_device *cgdev) +{ + struct ctcm_priv *priv; + int rc; + + CTCM_DBF_TEXT_(SETUP, CTC_DBF_INFO, "%s %p", __FUNCTION__, cgdev); + + if (!get_device(&cgdev->dev)) + return -ENODEV; + + priv = kzalloc(sizeof(struct ctcm_priv), GFP_KERNEL); + if (!priv) { + ctcm_pr_err("%s: Out of memory\n", __FUNCTION__); + put_device(&cgdev->dev); + return -ENOMEM; + } + + rc = ctcm_add_files(&cgdev->dev); + if (rc) { + kfree(priv); + put_device(&cgdev->dev); + return rc; + } + priv->buffer_size = CTCM_BUFSIZE_DEFAULT; + cgdev->cdev[0]->handler = ctcm_irq_handler; + cgdev->cdev[1]->handler = ctcm_irq_handler; + dev_set_drvdata(&cgdev->dev, priv); + + return 0; +} + +/** + * Add a new channel to the list of channels. + * Keeps the channel list sorted. + * + * cdev The ccw_device to be added. + * type The type class of the new channel. + * priv Points to the private data of the ccwgroup_device. + * + * returns 0 on success, !0 on error. 
+ */ +static int add_channel(struct ccw_device *cdev, enum channel_types type, + struct ctcm_priv *priv) +{ + struct channel **c = &channels; + struct channel *ch; + int ccw_num; + int rc = 0; + + CTCM_DBF_TEXT(TRACE, 2, __FUNCTION__); + ch = kzalloc(sizeof(struct channel), GFP_KERNEL); + if (ch == NULL) + goto nomem_return; + + ch->protocol = priv->protocol; + if (IS_MPC(priv)) { + ch->discontact_th = (struct th_header *) + kzalloc(TH_HEADER_LENGTH, gfp_type()); + if (ch->discontact_th == NULL) + goto nomem_return; + + ch->discontact_th->th_blk_flag = TH_DISCONTACT; + tasklet_init(&ch->ch_disc_tasklet, + mpc_action_send_discontact, (unsigned long)ch); + + tasklet_init(&ch->ch_tasklet, ctcmpc_bh, (unsigned long)ch); + ch->max_bufsize = (MPC_BUFSIZE_DEFAULT - 35); + ccw_num = 17; + } else + ccw_num = 8; + + ch->ccw = (struct ccw1 *) + kzalloc(ccw_num * sizeof(struct ccw1), GFP_KERNEL | GFP_DMA); + if (ch->ccw == NULL) + goto nomem_return; + + ch->cdev = cdev; + snprintf(ch->id, CTCM_ID_SIZE, "ch-%s", cdev->dev.bus_id); + ch->type = type; + + /** + * "static" ccws are used in the following way: + * + * ccw[0..2] (Channel program for generic I/O): + * 0: prepare + * 1: read or write (depending on direction) with fixed + * buffer (idal allocated once when buffer is allocated) + * 2: nop + * ccw[3..5] (Channel program for direct write of packets) + * 3: prepare + * 4: write (idal allocated on every write). + * 5: nop + * ccw[6..7] (Channel program for initial channel setup): + * 6: set extended mode + * 7: nop + * + * ch->ccw[0..5] are initialized in ch_action_start because + * the channel's direction is yet unknown here. + * + * ccws used for xid2 negotiations + * ch-ccw[8-14] need to be used for the XID exchange either + * X side XID2 Processing + * 8: write control + * 9: write th + * 10: write XID + * 11: read th from secondary + * 12: read XID from secondary + * 13: read 4 byte ID + * 14: nop + * Y side XID Processing + * 8: sense + * 9: read th + * 10: read XID + * 11: write th + * 12: write XID + * 13: write 4 byte ID + * 14: nop + * + * ccws used for double noop due to VM timing issues + * which result in unrecoverable Busy on channel + * 15: nop + * 16: nop + */ + ch->ccw[6].cmd_code = CCW_CMD_SET_EXTENDED; + ch->ccw[6].flags = CCW_FLAG_SLI; + + ch->ccw[7].cmd_code = CCW_CMD_NOOP; + ch->ccw[7].flags = CCW_FLAG_SLI; + + if (IS_MPC(priv)) { + ch->ccw[15].cmd_code = CCW_CMD_WRITE; + ch->ccw[15].flags = CCW_FLAG_SLI | CCW_FLAG_CC; + ch->ccw[15].count = TH_HEADER_LENGTH; + ch->ccw[15].cda = virt_to_phys(ch->discontact_th); + + ch->ccw[16].cmd_code = CCW_CMD_NOOP; + ch->ccw[16].flags = CCW_FLAG_SLI; + + ch->fsm = init_fsm(ch->id, ctc_ch_state_names, + ctc_ch_event_names, CTC_MPC_NR_STATES, + CTC_MPC_NR_EVENTS, ctcmpc_ch_fsm, + mpc_ch_fsm_len, GFP_KERNEL); + } else { + ch->fsm = init_fsm(ch->id, ctc_ch_state_names, + ctc_ch_event_names, CTC_NR_STATES, + CTC_NR_EVENTS, ch_fsm, + ch_fsm_len, GFP_KERNEL); + } + if (ch->fsm == NULL) + goto free_return; + + fsm_newstate(ch->fsm, CTC_STATE_IDLE); + + ch->irb = kzalloc(sizeof(struct irb), GFP_KERNEL); + if (ch->irb == NULL) + goto nomem_return; + + while (*c && ctcm_less_than((*c)->id, ch->id)) + c = &(*c)->next; + + if (*c && (!strncmp((*c)->id, ch->id, CTCM_ID_SIZE))) { + CTCM_DBF_TEXT_(SETUP, CTC_DBF_INFO, + "%s (%s) already in list, using old entry", + __FUNCTION__, (*c)->id); + + goto free_return; + } + + spin_lock_init(&ch->collect_lock); + + fsm_settimer(ch->fsm, &ch->timer); + skb_queue_head_init(&ch->io_queue); + 
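+ /* io_queue is the channel's universal I/O queue; collect_queue (set up
+  * next) gathers outgoing skbs while the channel is busy, as described
+  * in the struct channel definition in ctcm_main.h. */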
skb_queue_head_init(&ch->collect_queue); + + if (IS_MPC(priv)) { + fsm_settimer(ch->fsm, &ch->sweep_timer); + skb_queue_head_init(&ch->sweep_queue); + } + ch->next = *c; + *c = ch; + return 0; + +nomem_return: + ctcm_pr_warn("ctcm: Out of memory in %s\n", __FUNCTION__); + rc = -ENOMEM; + +free_return: /* note that all channel pointers are 0 or valid */ + kfree(ch->ccw); /* TODO: check that again */ + kfree(ch->discontact_th); + kfree_fsm(ch->fsm); + kfree(ch->irb); + kfree(ch); + return rc; +} + +/* + * Return type of a detected device. + */ +static enum channel_types get_channel_type(struct ccw_device_id *id) +{ + enum channel_types type; + type = (enum channel_types)id->driver_info; + + if (type == channel_type_ficon) + type = channel_type_escon; + + return type; +} + +/** + * + * Setup an interface. + * + * cgdev Device to be setup. + * + * returns 0 on success, !0 on failure. + */ +static int ctcm_new_device(struct ccwgroup_device *cgdev) +{ + char read_id[CTCM_ID_SIZE]; + char write_id[CTCM_ID_SIZE]; + int direction; + enum channel_types type; + struct ctcm_priv *priv; + struct net_device *dev; + int ret; + + CTCM_DBF_TEXT(SETUP, CTC_DBF_INFO, __FUNCTION__); + + priv = dev_get_drvdata(&cgdev->dev); + if (!priv) + return -ENODEV; + + type = get_channel_type(&cgdev->cdev[0]->id); + + snprintf(read_id, CTCM_ID_SIZE, "ch-%s", cgdev->cdev[0]->dev.bus_id); + snprintf(write_id, CTCM_ID_SIZE, "ch-%s", cgdev->cdev[1]->dev.bus_id); + + ret = add_channel(cgdev->cdev[0], type, priv); + if (ret) + return ret; + ret = add_channel(cgdev->cdev[1], type, priv); + if (ret) + return ret; + + ret = ccw_device_set_online(cgdev->cdev[0]); + if (ret != 0) { + CTCM_DBF_TEXT(SETUP, CTC_DBF_WARN, + "ccw_device_set_online (cdev[0]) failed "); + ctcm_pr_warn("ccw_device_set_online (cdev[0]) failed " + "with ret = %d\n", ret); + } + + ret = ccw_device_set_online(cgdev->cdev[1]); + if (ret != 0) { + CTCM_DBF_TEXT(SETUP, CTC_DBF_WARN, + "ccw_device_set_online (cdev[1]) failed "); + ctcm_pr_warn("ccw_device_set_online (cdev[1]) failed " + "with ret = %d\n", ret); + } + + dev = ctcm_init_netdevice(priv); + + if (dev == NULL) { + ctcm_pr_warn("ctcm_init_netdevice failed\n"); + goto out; + } + + for (direction = READ; direction <= WRITE; direction++) { + priv->channel[direction] = + channel_get(type, direction == READ ? read_id : write_id, + direction); + if (priv->channel[direction] == NULL) { + if (direction == WRITE) + channel_free(priv->channel[READ]); + ctcm_free_netdevice(dev); + goto out; + } + priv->channel[direction]->netdev = dev; + priv->channel[direction]->protocol = priv->protocol; + priv->channel[direction]->max_bufsize = priv->buffer_size; + } + /* sysfs magic */ + SET_NETDEV_DEV(dev, &cgdev->dev); + + if (ctcm_netdev_register(dev) != 0) { + ctcm_free_netdevice(dev); + goto out; + } + + if (ctcm_add_attributes(&cgdev->dev)) { + ctcm_netdev_unregister(dev); +/* dev->priv = NULL; why that ???? */ + ctcm_free_netdevice(dev); + goto out; + } + + strlcpy(priv->fsm->name, dev->name, sizeof(priv->fsm->name)); + + CTCM_DBF_TEXT_(SETUP, CTC_DBF_INFO, + "setup(%s) ok : r/w = %s / %s, proto : %d", + dev->name, priv->channel[READ]->id, + priv->channel[WRITE]->id, priv->protocol); + + return 0; +out: + ccw_device_set_offline(cgdev->cdev[1]); + ccw_device_set_offline(cgdev->cdev[0]); + + return -ENODEV; +} + +/** + * Shutdown an interface. + * + * cgdev Device to be shut down. + * + * returns 0 on success, !0 on failure. 
+ */ +static int ctcm_shutdown_device(struct ccwgroup_device *cgdev) +{ + struct ctcm_priv *priv; + struct net_device *dev; + + priv = dev_get_drvdata(&cgdev->dev); + if (!priv) + return -ENODEV; + + if (priv->channel[READ]) { + dev = priv->channel[READ]->netdev; + CTCM_DBF_DEV(SETUP, dev, ""); + /* Close the device */ + ctcm_close(dev); + dev->flags &= ~IFF_RUNNING; + ctcm_remove_attributes(&cgdev->dev); + channel_free(priv->channel[READ]); + } else + dev = NULL; + + if (priv->channel[WRITE]) + channel_free(priv->channel[WRITE]); + + if (dev) { + ctcm_netdev_unregister(dev); +/* dev->priv = NULL; why that ??? */ + ctcm_free_netdevice(dev); + } + + if (priv->fsm) + kfree_fsm(priv->fsm); + + ccw_device_set_offline(cgdev->cdev[1]); + ccw_device_set_offline(cgdev->cdev[0]); + + if (priv->channel[READ]) + channel_remove(priv->channel[READ]); + if (priv->channel[WRITE]) + channel_remove(priv->channel[WRITE]); + priv->channel[READ] = priv->channel[WRITE] = NULL; + + return 0; + +} + + +static void ctcm_remove_device(struct ccwgroup_device *cgdev) +{ + struct ctcm_priv *priv; + + CTCM_DBF_TEXT(SETUP, CTC_DBF_ERROR, __FUNCTION__); + + priv = dev_get_drvdata(&cgdev->dev); + if (!priv) + return; + if (cgdev->state == CCWGROUP_ONLINE) + ctcm_shutdown_device(cgdev); + ctcm_remove_files(&cgdev->dev); + dev_set_drvdata(&cgdev->dev, NULL); + kfree(priv); + put_device(&cgdev->dev); +} + +static struct ccwgroup_driver ctcm_group_driver = { + .owner = THIS_MODULE, + .name = CTC_DRIVER_NAME, + .max_slaves = 2, + .driver_id = 0xC3E3C3D4, /* CTCM */ + .probe = ctcm_probe_device, + .remove = ctcm_remove_device, + .set_online = ctcm_new_device, + .set_offline = ctcm_shutdown_device, +}; + + +/* + * Module related routines + */ + +/* + * Prepare to be unloaded. Free IRQ's and release all resources. + * This is called just before this module is unloaded. It is + * not called, if the usage count is !0, so we don't need to check + * for that. + */ +static void __exit ctcm_exit(void) +{ + unregister_cu3088_discipline(&ctcm_group_driver); + ctcm_unregister_dbf_views(); + ctcm_pr_info("CTCM driver unloaded\n"); +} + +/* + * Print Banner. + */ +static void print_banner(void) +{ + printk(KERN_INFO "CTCM driver initialized\n"); +} + +/** + * Initialize module. + * This is called just after the module is loaded. + * + * returns 0 on success, !0 on error. + */ +static int __init ctcm_init(void) +{ + int ret; + + channels = NULL; + + ret = ctcm_register_dbf_views(); + if (ret) { + ctcm_pr_crit("ctcm_init failed with ctcm_register_dbf_views " + "rc = %d\n", ret); + return ret; + } + ret = register_cu3088_discipline(&ctcm_group_driver); + if (ret) { + ctcm_unregister_dbf_views(); + ctcm_pr_crit("ctcm_init failed with register_cu3088_discipline " + "(rc = %d)\n", ret); + return ret; + } + print_banner(); + return ret; +} + +module_init(ctcm_init); +module_exit(ctcm_exit); + +MODULE_AUTHOR("Peter Tiedemann "); +MODULE_DESCRIPTION("Network driver for S/390 CTC + CTCMPC (SNA)"); +MODULE_LICENSE("GPL"); + diff --git a/drivers/s390/net/ctcm_main.h b/drivers/s390/net/ctcm_main.h new file mode 100644 index 000000000000..95b0c0b6ebc6 --- /dev/null +++ b/drivers/s390/net/ctcm_main.h @@ -0,0 +1,287 @@ +/* + * drivers/s390/net/ctcm_main.h + * + * Copyright IBM Corp. 
2001, 2007 + * Authors: Fritz Elfert (felfert@millenux.com) + * Peter Tiedemann (ptiedem@de.ibm.com) + */ + +#ifndef _CTCM_MAIN_H_ +#define _CTCM_MAIN_H_ + +#include +#include + +#include +#include + +#include "fsm.h" +#include "cu3088.h" +#include "ctcm_dbug.h" +#include "ctcm_mpc.h" + +#define CTC_DRIVER_NAME "ctcm" +#define CTC_DEVICE_NAME "ctc" +#define CTC_DEVICE_GENE "ctc%d" +#define MPC_DEVICE_NAME "mpc" +#define MPC_DEVICE_GENE "mpc%d" + +#define CHANNEL_FLAGS_READ 0 +#define CHANNEL_FLAGS_WRITE 1 +#define CHANNEL_FLAGS_INUSE 2 +#define CHANNEL_FLAGS_BUFSIZE_CHANGED 4 +#define CHANNEL_FLAGS_FAILED 8 +#define CHANNEL_FLAGS_WAITIRQ 16 +#define CHANNEL_FLAGS_RWMASK 1 +#define CHANNEL_DIRECTION(f) (f & CHANNEL_FLAGS_RWMASK) + +#define LOG_FLAG_ILLEGALPKT 1 +#define LOG_FLAG_ILLEGALSIZE 2 +#define LOG_FLAG_OVERRUN 4 +#define LOG_FLAG_NOMEM 8 + +#define ctcm_pr_debug(fmt, arg...) printk(KERN_DEBUG fmt, ##arg) +#define ctcm_pr_info(fmt, arg...) printk(KERN_INFO fmt, ##arg) +#define ctcm_pr_notice(fmt, arg...) printk(KERN_NOTICE fmt, ##arg) +#define ctcm_pr_warn(fmt, arg...) printk(KERN_WARNING fmt, ##arg) +#define ctcm_pr_emerg(fmt, arg...) printk(KERN_EMERG fmt, ##arg) +#define ctcm_pr_err(fmt, arg...) printk(KERN_ERR fmt, ##arg) +#define ctcm_pr_crit(fmt, arg...) printk(KERN_CRIT fmt, ##arg) + +/* + * CCW commands, used in this driver. + */ +#define CCW_CMD_WRITE 0x01 +#define CCW_CMD_READ 0x02 +#define CCW_CMD_NOOP 0x03 +#define CCW_CMD_TIC 0x08 +#define CCW_CMD_SENSE_CMD 0x14 +#define CCW_CMD_WRITE_CTL 0x17 +#define CCW_CMD_SET_EXTENDED 0xc3 +#define CCW_CMD_PREPARE 0xe3 + +#define CTCM_PROTO_S390 0 +#define CTCM_PROTO_LINUX 1 +#define CTCM_PROTO_LINUX_TTY 2 +#define CTCM_PROTO_OS390 3 +#define CTCM_PROTO_MPC 4 +#define CTCM_PROTO_MAX 4 + +#define CTCM_BUFSIZE_LIMIT 65535 +#define CTCM_BUFSIZE_DEFAULT 32768 +#define MPC_BUFSIZE_DEFAULT CTCM_BUFSIZE_LIMIT + +#define CTCM_TIME_1_SEC 1000 +#define CTCM_TIME_5_SEC 5000 +#define CTCM_TIME_10_SEC 10000 + +#define CTCM_INITIAL_BLOCKLEN 2 + +#define READ 0 +#define WRITE 1 + +#define CTCM_ID_SIZE BUS_ID_SIZE+3 + +struct ctcm_profile { + unsigned long maxmulti; + unsigned long maxcqueue; + unsigned long doios_single; + unsigned long doios_multi; + unsigned long txlen; + unsigned long tx_time; + struct timespec send_stamp; +}; + +/* + * Definition of one channel + */ +struct channel { + struct channel *next; + char id[CTCM_ID_SIZE]; + struct ccw_device *cdev; + /* + * Type of this channel. + * CTC/A or Escon for valid channels. + */ + enum channel_types type; + /* + * Misc. flags. See CHANNEL_FLAGS_... below + */ + __u32 flags; + __u16 protocol; /* protocol of this channel (4 = MPC) */ + /* + * I/O and irq related stuff + */ + struct ccw1 *ccw; + struct irb *irb; + /* + * RX/TX buffer size + */ + int max_bufsize; + struct sk_buff *trans_skb; /* transmit/receive buffer */ + struct sk_buff_head io_queue; /* universal I/O queue */ + struct tasklet_struct ch_tasklet; /* MPC ONLY */ + /* + * TX queue for collecting skb's during busy. + */ + struct sk_buff_head collect_queue; + /* + * Amount of data in collect_queue. + */ + int collect_len; + /* + * spinlock for collect_queue and collect_len + */ + spinlock_t collect_lock; + /* + * Timer for detecting unresposive + * I/O operations. 
+ */ + fsm_timer timer; + /* MPC ONLY section begin */ + __u32 th_seq_num; /* SNA TH seq number */ + __u8 th_seg; + __u32 pdu_seq; + struct sk_buff *xid_skb; + char *xid_skb_data; + struct th_header *xid_th; + struct xid2 *xid; + char *xid_id; + struct th_header *rcvd_xid_th; + struct xid2 *rcvd_xid; + char *rcvd_xid_id; + __u8 in_mpcgroup; + fsm_timer sweep_timer; + struct sk_buff_head sweep_queue; + struct th_header *discontact_th; + struct tasklet_struct ch_disc_tasklet; + /* MPC ONLY section end */ + + int retry; /* retry counter for misc. operations */ + fsm_instance *fsm; /* finite state machine of this channel */ + struct net_device *netdev; /* corresponding net_device */ + struct ctcm_profile prof; + unsigned char *trans_skb_data; + __u16 logflags; +}; + +struct ctcm_priv { + struct net_device_stats stats; + unsigned long tbusy; + + /* The MPC group struct of this interface */ + struct mpc_group *mpcg; /* MPC only */ + struct xid2 *xid; /* MPC only */ + + /* The finite state machine of this interface */ + fsm_instance *fsm; + + /* The protocol of this device */ + __u16 protocol; + + /* Timer for restarting after I/O Errors */ + fsm_timer restart_timer; + + int buffer_size; /* ctc only */ + + struct channel *channel[2]; +}; + +int ctcm_open(struct net_device *dev); +int ctcm_close(struct net_device *dev); + +/* + * prototypes for non-static sysfs functions + */ +int ctcm_add_attributes(struct device *dev); +void ctcm_remove_attributes(struct device *dev); +int ctcm_add_files(struct device *dev); +void ctcm_remove_files(struct device *dev); + +/* + * Compatibility macros for busy handling + * of network devices. + */ +static inline void ctcm_clear_busy_do(struct net_device *dev) +{ + clear_bit(0, &(((struct ctcm_priv *)dev->priv)->tbusy)); + netif_wake_queue(dev); +} + +static inline void ctcm_clear_busy(struct net_device *dev) +{ + struct mpc_group *grp; + grp = ((struct ctcm_priv *)dev->priv)->mpcg; + + if (!(grp && grp->in_sweep)) + ctcm_clear_busy_do(dev); +} + + +static inline int ctcm_test_and_set_busy(struct net_device *dev) +{ + netif_stop_queue(dev); + return test_and_set_bit(0, &(((struct ctcm_priv *)dev->priv)->tbusy)); +} + +extern int loglevel; +extern struct channel *channels; + +void ctcm_unpack_skb(struct channel *ch, struct sk_buff *pskb); + +/* + * Functions related to setup and device detection. + */ + +static inline int ctcm_less_than(char *id1, char *id2) +{ + unsigned long dev1, dev2; + + id1 = id1 + 5; + id2 = id2 + 5; + + dev1 = simple_strtoul(id1, &id1, 16); + dev2 = simple_strtoul(id2, &id2, 16); + + return (dev1 < dev2); +} + +int ctcm_ch_alloc_buffer(struct channel *ch); + +static inline int ctcm_checkalloc_buffer(struct channel *ch) +{ + if (ch->trans_skb == NULL) + return ctcm_ch_alloc_buffer(ch); + if (ch->flags & CHANNEL_FLAGS_BUFSIZE_CHANGED) { + dev_kfree_skb(ch->trans_skb); + return ctcm_ch_alloc_buffer(ch); + } + return 0; +} + +struct mpc_group *ctcmpc_init_mpc_group(struct ctcm_priv *priv); + +/* test if protocol attribute (of struct ctcm_priv or struct channel) + * has MPC protocol setting. Type is not checked + */ +#define IS_MPC(p) ((p)->protocol == CTCM_PROTO_MPC) + +/* test if struct ctcm_priv of struct net_device has MPC protocol setting */ +#define IS_MPCDEV(d) IS_MPC((struct ctcm_priv *)d->priv) + +static inline gfp_t gfp_type(void) +{ + return in_interrupt() ? GFP_ATOMIC : GFP_KERNEL; +} + +/* + * Definition of our link level header. 
+ */ +struct ll_header { + __u16 length; + __u16 type; + __u16 unused; +}; +#define LL_HEADER_LENGTH (sizeof(struct ll_header)) + +#endif diff --git a/drivers/s390/net/ctcm_mpc.c b/drivers/s390/net/ctcm_mpc.c new file mode 100644 index 000000000000..044addee64a2 --- /dev/null +++ b/drivers/s390/net/ctcm_mpc.c @@ -0,0 +1,2472 @@ +/* + * drivers/s390/net/ctcm_mpc.c + * + * Copyright IBM Corp. 2004, 2007 + * Authors: Belinda Thompson (belindat@us.ibm.com) + * Andy Richter (richtera@us.ibm.com) + * Peter Tiedemann (ptiedem@de.ibm.com) + */ + +/* + This module exports functions to be used by CCS: + EXPORT_SYMBOL(ctc_mpc_alloc_channel); + EXPORT_SYMBOL(ctc_mpc_establish_connectivity); + EXPORT_SYMBOL(ctc_mpc_dealloc_ch); + EXPORT_SYMBOL(ctc_mpc_flow_control); +*/ + +#undef DEBUG +#undef DEBUGDATA +#undef DEBUGCCW + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include + +#include /* instead of ok ? */ +#include +#include +#include /* instead of ok ? */ +#include /* instead of ok ? */ +#include +#include +#include + +#include "cu3088.h" +#include "ctcm_mpc.h" +#include "ctcm_main.h" +#include "ctcm_fsms.h" + +static const struct xid2 init_xid = { + .xid2_type_id = XID_FM2, + .xid2_len = 0x45, + .xid2_adj_id = 0, + .xid2_rlen = 0x31, + .xid2_resv1 = 0, + .xid2_flag1 = 0, + .xid2_fmtt = 0, + .xid2_flag4 = 0x80, + .xid2_resv2 = 0, + .xid2_tgnum = 0, + .xid2_sender_id = 0, + .xid2_flag2 = 0, + .xid2_option = XID2_0, + .xid2_resv3 = "\x00", + .xid2_resv4 = 0, + .xid2_dlc_type = XID2_READ_SIDE, + .xid2_resv5 = 0, + .xid2_mpc_flag = 0, + .xid2_resv6 = 0, + .xid2_buf_len = (MPC_BUFSIZE_DEFAULT - 35), +}; + +static const struct th_header thnorm = { + .th_seg = 0x00, + .th_ch_flag = TH_IS_XID, + .th_blk_flag = TH_DATA_IS_XID, + .th_is_xid = 0x01, + .th_seq_num = 0x00000000, +}; + +static const struct th_header thdummy = { + .th_seg = 0x00, + .th_ch_flag = 0x00, + .th_blk_flag = TH_DATA_IS_XID, + .th_is_xid = 0x01, + .th_seq_num = 0x00000000, +}; + +/* + * Definition of one MPC group + */ + +/* + * Compatibility macros for busy handling + * of network devices. 
+ */ + +static void ctcmpc_unpack_skb(struct channel *ch, struct sk_buff *pskb); + +/* + * MPC Group state machine actions (static prototypes) + */ +static void mpc_action_nop(fsm_instance *fsm, int event, void *arg); +static void mpc_action_go_ready(fsm_instance *fsm, int event, void *arg); +static void mpc_action_go_inop(fsm_instance *fi, int event, void *arg); +static void mpc_action_timeout(fsm_instance *fi, int event, void *arg); +static int mpc_validate_xid(struct mpcg_info *mpcginfo); +static void mpc_action_yside_xid(fsm_instance *fsm, int event, void *arg); +static void mpc_action_doxid0(fsm_instance *fsm, int event, void *arg); +static void mpc_action_doxid7(fsm_instance *fsm, int event, void *arg); +static void mpc_action_xside_xid(fsm_instance *fsm, int event, void *arg); +static void mpc_action_rcvd_xid0(fsm_instance *fsm, int event, void *arg); +static void mpc_action_rcvd_xid7(fsm_instance *fsm, int event, void *arg); + +#ifdef DEBUGDATA +/*-------------------------------------------------------------------* +* Dump buffer format * +* * +*--------------------------------------------------------------------*/ +void ctcmpc_dumpit(char *buf, int len) +{ + __u32 ct, sw, rm, dup; + char *ptr, *rptr; + char tbuf[82], tdup[82]; + #if (UTS_MACHINE == s390x) + char addr[22]; + #else + char addr[12]; + #endif + char boff[12]; + char bhex[82], duphex[82]; + char basc[40]; + + sw = 0; + rptr = ptr = buf; + rm = 16; + duphex[0] = 0x00; + dup = 0; + + for (ct = 0; ct < len; ct++, ptr++, rptr++) { + if (sw == 0) { + #if (UTS_MACHINE == s390x) + sprintf(addr, "%16.16lx", (unsigned long)rptr); + #else + sprintf(addr, "%8.8X", (__u32)rptr); + #endif + + sprintf(boff, "%4.4X", (__u32)ct); + bhex[0] = '\0'; + basc[0] = '\0'; + } + if ((sw == 4) || (sw == 12)) + strcat(bhex, " "); + if (sw == 8) + strcat(bhex, " "); + + #if (UTS_MACHINE == s390x) + sprintf(tbuf, "%2.2lX", (unsigned long)*ptr); + #else + sprintf(tbuf, "%2.2X", (__u32)*ptr); + #endif + + tbuf[2] = '\0'; + strcat(bhex, tbuf); + if ((0 != isprint(*ptr)) && (*ptr >= 0x20)) + basc[sw] = *ptr; + else + basc[sw] = '.'; + + basc[sw+1] = '\0'; + sw++; + rm--; + if (sw == 16) { + if ((strcmp(duphex, bhex)) != 0) { + if (dup != 0) { + sprintf(tdup, "Duplicate as above " + "to %s", addr); + printk(KERN_INFO " " + " --- %s ---\n", tdup); + } + printk(KERN_INFO " %s (+%s) : %s [%s]\n", + addr, boff, bhex, basc); + dup = 0; + strcpy(duphex, bhex); + } else + dup++; + + sw = 0; + rm = 16; + } + } /* endfor */ + + if (sw != 0) { + for ( ; rm > 0; rm--, sw++) { + if ((sw == 4) || (sw == 12)) + strcat(bhex, " "); + if (sw == 8) + strcat(bhex, " "); + strcat(bhex, " "); + strcat(basc, " "); + } + if (dup != 0) { + sprintf(tdup, "Duplicate as above to %s", addr); + printk(KERN_INFO " " + " --- %s ---\n", tdup); + } + printk(KERN_INFO " %s (+%s) : %s [%s]\n", + addr, boff, bhex, basc); + } else { + if (dup >= 1) { + sprintf(tdup, "Duplicate as above to %s", addr); + printk(KERN_INFO " " + " --- %s ---\n", tdup); + } + if (dup != 0) { + printk(KERN_INFO " %s (+%s) : %s [%s]\n", + addr, boff, bhex, basc); + } + } + + return; + +} /* end of ctcmpc_dumpit */ +#endif + +#ifdef DEBUGDATA +/* + * Dump header and first 16 bytes of an sk_buff for debugging purposes. + * + * skb The sk_buff to dump. + * offset Offset relative to skb-data, where to start the dump. 
+ */ +void ctcmpc_dump_skb(struct sk_buff *skb, int offset) +{ + unsigned char *p = skb->data; + struct th_header *header; + struct pdu *pheader; + int bl = skb->len; + int i; + + if (p == NULL) + return; + + p += offset; + header = (struct th_header *)p; + + printk(KERN_INFO "dump:\n"); + printk(KERN_INFO "skb len=%d \n", skb->len); + if (skb->len > 2) { + switch (header->th_ch_flag) { + case TH_HAS_PDU: + break; + case 0x00: + case TH_IS_XID: + if ((header->th_blk_flag == TH_DATA_IS_XID) && + (header->th_is_xid == 0x01)) + goto dumpth; + case TH_SWEEP_REQ: + goto dumpth; + case TH_SWEEP_RESP: + goto dumpth; + default: + break; + } + + pheader = (struct pdu *)p; + printk(KERN_INFO "pdu->offset: %d hex: %04x\n", + pheader->pdu_offset, pheader->pdu_offset); + printk(KERN_INFO "pdu->flag : %02x\n", pheader->pdu_flag); + printk(KERN_INFO "pdu->proto : %02x\n", pheader->pdu_proto); + printk(KERN_INFO "pdu->seq : %02x\n", pheader->pdu_seq); + goto dumpdata; + +dumpth: + printk(KERN_INFO "th->seg : %02x\n", header->th_seg); + printk(KERN_INFO "th->ch : %02x\n", header->th_ch_flag); + printk(KERN_INFO "th->blk_flag: %02x\n", header->th_blk_flag); + printk(KERN_INFO "th->type : %s\n", + (header->th_is_xid) ? "DATA" : "XID"); + printk(KERN_INFO "th->seqnum : %04x\n", header->th_seq_num); + + } +dumpdata: + if (bl > 32) + bl = 32; + printk(KERN_INFO "data: "); + for (i = 0; i < bl; i++) + printk(KERN_INFO "%02x%s", *p++, (i % 16) ? " " : "\n<7>"); + printk(KERN_INFO "\n"); +} +#endif + +/* + * ctc_mpc_alloc_channel + * (exported interface) + * + * Device Initialization : + * ACTPATH driven IO operations + */ +int ctc_mpc_alloc_channel(int port_num, void (*callback)(int, int)) +{ + char device[20]; + struct net_device *dev; + struct mpc_group *grp; + struct ctcm_priv *priv; + + ctcm_pr_debug("ctcmpc enter: %s()\n", __FUNCTION__); + + sprintf(device, "%s%i", MPC_DEVICE_NAME, port_num); + dev = __dev_get_by_name(&init_net, device); + + if (dev == NULL) { + printk(KERN_INFO "ctc_mpc_alloc_channel %s dev=NULL\n", device); + return 1; + } + + priv = dev->priv; + grp = priv->mpcg; + if (!grp) + return 1; + + grp->allochanfunc = callback; + grp->port_num = port_num; + grp->port_persist = 1; + + ctcm_pr_debug("ctcmpc: %s called for device %s state=%s\n", + __FUNCTION__, + dev->name, + fsm_getstate_str(grp->fsm)); + + switch (fsm_getstate(grp->fsm)) { + case MPCG_STATE_INOP: + /* Group is in the process of terminating */ + grp->alloc_called = 1; + break; + case MPCG_STATE_RESET: + /* MPC Group will transition to state */ + /* MPCG_STATE_XID2INITW iff the minimum number */ + /* of 1 read and 1 write channel have successfully*/ + /* activated */ + /*fsm_newstate(grp->fsm, MPCG_STATE_XID2INITW);*/ + if (callback) + grp->send_qllc_disc = 1; + case MPCG_STATE_XID0IOWAIT: + fsm_deltimer(&grp->timer); + grp->outstanding_xid2 = 0; + grp->outstanding_xid7 = 0; + grp->outstanding_xid7_p2 = 0; + grp->saved_xid2 = NULL; + if (callback) + ctcm_open(dev); + fsm_event(priv->fsm, DEV_EVENT_START, dev); + break; + case MPCG_STATE_READY: + /* XID exchanges completed after PORT was activated */ + /* Link station already active */ + /* Maybe timing issue...retry callback */ + grp->allocchan_callback_retries++; + if (grp->allocchan_callback_retries < 4) { + if (grp->allochanfunc) + grp->allochanfunc(grp->port_num, + grp->group_max_buflen); + } else { + /* there are problems...bail out */ + /* there may be a state mismatch so restart */ + grp->port_persist = 1; + fsm_event(grp->fsm, MPCG_EVENT_INOP, dev); + 
grp->allocchan_callback_retries = 0; + } + break; + default: + return 0; + + } + + ctcm_pr_debug("ctcmpc exit: %s()\n", __FUNCTION__); + return 0; +} +EXPORT_SYMBOL(ctc_mpc_alloc_channel); + +/* + * ctc_mpc_establish_connectivity + * (exported interface) + */ +void ctc_mpc_establish_connectivity(int port_num, + void (*callback)(int, int, int)) +{ + char device[20]; + struct net_device *dev; + struct mpc_group *grp; + struct ctcm_priv *priv; + struct channel *rch, *wch; + + ctcm_pr_debug("ctcmpc enter: %s()\n", __FUNCTION__); + + sprintf(device, "%s%i", MPC_DEVICE_NAME, port_num); + dev = __dev_get_by_name(&init_net, device); + + if (dev == NULL) { + printk(KERN_INFO "ctc_mpc_establish_connectivity " + "%s dev=NULL\n", device); + return; + } + priv = dev->priv; + rch = priv->channel[READ]; + wch = priv->channel[WRITE]; + + grp = priv->mpcg; + + ctcm_pr_debug("ctcmpc: %s() called for device %s state=%s\n", + __FUNCTION__, dev->name, + fsm_getstate_str(grp->fsm)); + + grp->estconnfunc = callback; + grp->port_num = port_num; + + switch (fsm_getstate(grp->fsm)) { + case MPCG_STATE_READY: + /* XID exchanges completed after PORT was activated */ + /* Link station already active */ + /* Maybe timing issue...retry callback */ + fsm_deltimer(&grp->timer); + grp->estconn_callback_retries++; + if (grp->estconn_callback_retries < 4) { + if (grp->estconnfunc) { + grp->estconnfunc(grp->port_num, 0, + grp->group_max_buflen); + grp->estconnfunc = NULL; + } + } else { + /* there are problems...bail out */ + fsm_event(grp->fsm, MPCG_EVENT_INOP, dev); + grp->estconn_callback_retries = 0; + } + break; + case MPCG_STATE_INOP: + case MPCG_STATE_RESET: + /* MPC Group is not ready to start XID - min num of */ + /* 1 read and 1 write channel have not been acquired*/ + printk(KERN_WARNING "ctcmpc: %s() REJECTED ACTIVE XID Req" + "uest - Channel Pair is not Active\n", __FUNCTION__); + if (grp->estconnfunc) { + grp->estconnfunc(grp->port_num, -1, 0); + grp->estconnfunc = NULL; + } + break; + case MPCG_STATE_XID2INITW: + /* alloc channel was called but no XID exchange */ + /* has occurred. 
initiate xside XID exchange */ + /* make sure yside XID0 processing has not started */ + if ((fsm_getstate(rch->fsm) > CH_XID0_PENDING) || + (fsm_getstate(wch->fsm) > CH_XID0_PENDING)) { + printk(KERN_WARNING "mpc: %s() ABORT ACTIVE XID" + " Request- PASSIVE XID in process\n" + , __FUNCTION__); + break; + } + grp->send_qllc_disc = 1; + fsm_newstate(grp->fsm, MPCG_STATE_XID0IOWAIT); + fsm_deltimer(&grp->timer); + fsm_addtimer(&grp->timer, MPC_XID_TIMEOUT_VALUE, + MPCG_EVENT_TIMER, dev); + grp->outstanding_xid7 = 0; + grp->outstanding_xid7_p2 = 0; + grp->saved_xid2 = NULL; + if ((rch->in_mpcgroup) && + (fsm_getstate(rch->fsm) == CH_XID0_PENDING)) + fsm_event(grp->fsm, MPCG_EVENT_XID0DO, rch); + else { + printk(KERN_WARNING "mpc: %s() Unable to start" + " ACTIVE XID0 on read channel\n", + __FUNCTION__); + if (grp->estconnfunc) { + grp->estconnfunc(grp->port_num, -1, 0); + grp->estconnfunc = NULL; + } + fsm_deltimer(&grp->timer); + goto done; + } + if ((wch->in_mpcgroup) && + (fsm_getstate(wch->fsm) == CH_XID0_PENDING)) + fsm_event(grp->fsm, MPCG_EVENT_XID0DO, wch); + else { + printk(KERN_WARNING "mpc: %s() Unable to start" + " ACTIVE XID0 on write channel\n", + __FUNCTION__); + if (grp->estconnfunc) { + grp->estconnfunc(grp->port_num, -1, 0); + grp->estconnfunc = NULL; + } + fsm_deltimer(&grp->timer); + goto done; + } + break; + case MPCG_STATE_XID0IOWAIT: + /* already in active XID negotiations */ + default: + break; + } + +done: + ctcm_pr_debug("ctcmpc exit: %s()\n", __FUNCTION__); + return; +} +EXPORT_SYMBOL(ctc_mpc_establish_connectivity); + +/* + * ctc_mpc_dealloc_ch + * (exported interface) + */ +void ctc_mpc_dealloc_ch(int port_num) +{ + struct net_device *dev; + char device[20]; + struct ctcm_priv *priv; + struct mpc_group *grp; + + ctcm_pr_debug("ctcmpc enter: %s()\n", __FUNCTION__); + sprintf(device, "%s%i", MPC_DEVICE_NAME, port_num); + dev = __dev_get_by_name(&init_net, device); + + if (dev == NULL) { + printk(KERN_INFO "%s() %s dev=NULL\n", __FUNCTION__, device); + goto done; + } + + ctcm_pr_debug("ctcmpc:%s %s() called for device %s refcount=%d\n", + dev->name, __FUNCTION__, + dev->name, atomic_read(&dev->refcnt)); + + priv = dev->priv; + if (priv == NULL) { + printk(KERN_INFO "%s() %s priv=NULL\n", + __FUNCTION__, device); + goto done; + } + fsm_deltimer(&priv->restart_timer); + + grp = priv->mpcg; + if (grp == NULL) { + printk(KERN_INFO "%s() %s dev=NULL\n", __FUNCTION__, device); + goto done; + } + grp->channels_terminating = 0; + + fsm_deltimer(&grp->timer); + + grp->allochanfunc = NULL; + grp->estconnfunc = NULL; + grp->port_persist = 0; + grp->send_qllc_disc = 0; + fsm_event(grp->fsm, MPCG_EVENT_INOP, dev); + + ctcm_close(dev); +done: + ctcm_pr_debug("ctcmpc exit: %s()\n", __FUNCTION__); + return; +} +EXPORT_SYMBOL(ctc_mpc_dealloc_ch); + +/* + * ctc_mpc_flow_control + * (exported interface) + */ +void ctc_mpc_flow_control(int port_num, int flowc) +{ + char device[20]; + struct ctcm_priv *priv; + struct mpc_group *grp; + struct net_device *dev; + struct channel *rch; + int mpcg_state; + + ctcm_pr_debug("ctcmpc enter: %s() %i\n", __FUNCTION__, flowc); + + sprintf(device, "%s%i", MPC_DEVICE_NAME, port_num); + dev = __dev_get_by_name(&init_net, device); + + if (dev == NULL) { + printk(KERN_INFO "ctc_mpc_flow_control %s dev=NULL\n", device); + return; + } + + ctcm_pr_debug("ctcmpc: %s %s called \n", dev->name, __FUNCTION__); + + priv = dev->priv; + if (priv == NULL) { + printk(KERN_INFO "ctcmpc:%s() %s priv=NULL\n", + __FUNCTION__, device); + return; + } + grp = priv->mpcg; + 
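+ /* the read channel is looked up here because its tasklet is
+  * scheduled below when flow control is switched off, so that data
+  * queued on the io_queue gets delivered. */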
rch = priv->channel[READ]; + + mpcg_state = fsm_getstate(grp->fsm); + switch (flowc) { + case 1: + if (mpcg_state == MPCG_STATE_FLOWC) + break; + if (mpcg_state == MPCG_STATE_READY) { + if (grp->flow_off_called == 1) + grp->flow_off_called = 0; + else + fsm_newstate(grp->fsm, MPCG_STATE_FLOWC); + break; + } + break; + case 0: + if (mpcg_state == MPCG_STATE_FLOWC) { + fsm_newstate(grp->fsm, MPCG_STATE_READY); + /* ensure any data that has accumulated */ + /* on the io_queue will now be sen t */ + tasklet_schedule(&rch->ch_tasklet); + } + /* possible race condition */ + if (mpcg_state == MPCG_STATE_READY) { + grp->flow_off_called = 1; + break; + } + break; + } + + ctcm_pr_debug("ctcmpc exit: %s() %i\n", __FUNCTION__, flowc); +} +EXPORT_SYMBOL(ctc_mpc_flow_control); + +static int mpc_send_qllc_discontact(struct net_device *); + +/* + * helper function of ctcmpc_unpack_skb +*/ +static void mpc_rcvd_sweep_resp(struct mpcg_info *mpcginfo) +{ + struct channel *rch = mpcginfo->ch; + struct net_device *dev = rch->netdev; + struct ctcm_priv *priv = dev->priv; + struct mpc_group *grp = priv->mpcg; + struct channel *ch = priv->channel[WRITE]; + + if (do_debug) + ctcm_pr_debug("ctcmpc enter: %s(): ch=0x%p id=%s\n", + __FUNCTION__, ch, ch->id); + + if (do_debug_data) + ctcmpc_dumpit((char *)mpcginfo->sweep, TH_SWEEP_LENGTH); + + grp->sweep_rsp_pend_num--; + + if ((grp->sweep_req_pend_num == 0) && + (grp->sweep_rsp_pend_num == 0)) { + fsm_deltimer(&ch->sweep_timer); + grp->in_sweep = 0; + rch->th_seq_num = 0x00; + ch->th_seq_num = 0x00; + ctcm_clear_busy_do(dev); + } + + kfree(mpcginfo); + + return; + +} + +/* + * helper function of mpc_rcvd_sweep_req + * which is a helper of ctcmpc_unpack_skb + */ +static void ctcmpc_send_sweep_resp(struct channel *rch) +{ + struct net_device *dev = rch->netdev; + struct ctcm_priv *priv = dev->priv; + struct mpc_group *grp = priv->mpcg; + int rc = 0; + struct th_sweep *header; + struct sk_buff *sweep_skb; + struct channel *ch = priv->channel[WRITE]; + + if (do_debug) + ctcm_pr_debug("ctcmpc exit : %s(): ch=0x%p id=%s\n", + __FUNCTION__, rch, rch->id); + + sweep_skb = __dev_alloc_skb(MPC_BUFSIZE_DEFAULT, + GFP_ATOMIC|GFP_DMA); + if (sweep_skb == NULL) { + printk(KERN_INFO "Couldn't alloc sweep_skb\n"); + rc = -ENOMEM; + goto done; + } + + header = (struct th_sweep *) + kmalloc(sizeof(struct th_sweep), gfp_type()); + + if (!header) { + dev_kfree_skb_any(sweep_skb); + rc = -ENOMEM; + goto done; + } + + header->th.th_seg = 0x00 ; + header->th.th_ch_flag = TH_SWEEP_RESP; + header->th.th_blk_flag = 0x00; + header->th.th_is_xid = 0x00; + header->th.th_seq_num = 0x00; + header->sw.th_last_seq = ch->th_seq_num; + + memcpy(skb_put(sweep_skb, TH_SWEEP_LENGTH), header, TH_SWEEP_LENGTH); + + kfree(header); + + dev->trans_start = jiffies; + skb_queue_tail(&ch->sweep_queue, sweep_skb); + + fsm_addtimer(&ch->sweep_timer, 100, CTC_EVENT_RSWEEP_TIMER, ch); + + return; + +done: + if (rc != 0) { + grp->in_sweep = 0; + ctcm_clear_busy_do(dev); + fsm_event(grp->fsm, MPCG_EVENT_INOP, dev); + } + + return; +} + +/* + * helper function of ctcmpc_unpack_skb + */ +static void mpc_rcvd_sweep_req(struct mpcg_info *mpcginfo) +{ + struct channel *rch = mpcginfo->ch; + struct net_device *dev = rch->netdev; + struct ctcm_priv *priv = dev->priv; + struct mpc_group *grp = priv->mpcg; + struct channel *ch = priv->channel[WRITE]; + + if (do_debug) + CTCM_DBF_TEXT_(MPC_TRACE, CTC_DBF_DEBUG, + " %s(): ch=0x%p id=%s\n", __FUNCTION__, ch, ch->id); + + if (grp->in_sweep == 0) { + grp->in_sweep = 1; + 
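+ /* stop the transmit queue while the sweep request/response
+  * exchange is outstanding on the active read channels; it is
+  * reopened once both pending counters drop to zero. */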
ctcm_test_and_set_busy(dev); + grp->sweep_req_pend_num = grp->active_channels[READ]; + grp->sweep_rsp_pend_num = grp->active_channels[READ]; + } + + if (do_debug_data) + ctcmpc_dumpit((char *)mpcginfo->sweep, TH_SWEEP_LENGTH); + + grp->sweep_req_pend_num--; + ctcmpc_send_sweep_resp(ch); + kfree(mpcginfo); + return; +} + +/* + * MPC Group Station FSM definitions + */ +static const char *mpcg_event_names[] = { + [MPCG_EVENT_INOP] = "INOP Condition", + [MPCG_EVENT_DISCONC] = "Discontact Received", + [MPCG_EVENT_XID0DO] = "Channel Active - Start XID", + [MPCG_EVENT_XID2] = "XID2 Received", + [MPCG_EVENT_XID2DONE] = "XID0 Complete", + [MPCG_EVENT_XID7DONE] = "XID7 Complete", + [MPCG_EVENT_TIMER] = "XID Setup Timer", + [MPCG_EVENT_DOIO] = "XID DoIO", +}; + +static const char *mpcg_state_names[] = { + [MPCG_STATE_RESET] = "Reset", + [MPCG_STATE_INOP] = "INOP", + [MPCG_STATE_XID2INITW] = "Passive XID- XID0 Pending Start", + [MPCG_STATE_XID2INITX] = "Passive XID- XID0 Pending Complete", + [MPCG_STATE_XID7INITW] = "Passive XID- XID7 Pending P1 Start", + [MPCG_STATE_XID7INITX] = "Passive XID- XID7 Pending P2 Complete", + [MPCG_STATE_XID0IOWAIT] = "Active XID- XID0 Pending Start", + [MPCG_STATE_XID0IOWAIX] = "Active XID- XID0 Pending Complete", + [MPCG_STATE_XID7INITI] = "Active XID- XID7 Pending Start", + [MPCG_STATE_XID7INITZ] = "Active XID- XID7 Pending Complete ", + [MPCG_STATE_XID7INITF] = "XID - XID7 Complete ", + [MPCG_STATE_FLOWC] = "FLOW CONTROL ON", + [MPCG_STATE_READY] = "READY", +}; + +/* + * The MPC Group Station FSM + * 22 events + */ +static const fsm_node mpcg_fsm[] = { + { MPCG_STATE_RESET, MPCG_EVENT_INOP, mpc_action_go_inop }, + { MPCG_STATE_INOP, MPCG_EVENT_INOP, mpc_action_nop }, + { MPCG_STATE_FLOWC, MPCG_EVENT_INOP, mpc_action_go_inop }, + + { MPCG_STATE_READY, MPCG_EVENT_DISCONC, mpc_action_discontact }, + { MPCG_STATE_READY, MPCG_EVENT_INOP, mpc_action_go_inop }, + + { MPCG_STATE_XID2INITW, MPCG_EVENT_XID0DO, mpc_action_doxid0 }, + { MPCG_STATE_XID2INITW, MPCG_EVENT_XID2, mpc_action_rcvd_xid0 }, + { MPCG_STATE_XID2INITW, MPCG_EVENT_INOP, mpc_action_go_inop }, + { MPCG_STATE_XID2INITW, MPCG_EVENT_TIMER, mpc_action_timeout }, + { MPCG_STATE_XID2INITW, MPCG_EVENT_DOIO, mpc_action_yside_xid }, + + { MPCG_STATE_XID2INITX, MPCG_EVENT_XID0DO, mpc_action_doxid0 }, + { MPCG_STATE_XID2INITX, MPCG_EVENT_XID2, mpc_action_rcvd_xid0 }, + { MPCG_STATE_XID2INITX, MPCG_EVENT_INOP, mpc_action_go_inop }, + { MPCG_STATE_XID2INITX, MPCG_EVENT_TIMER, mpc_action_timeout }, + { MPCG_STATE_XID2INITX, MPCG_EVENT_DOIO, mpc_action_yside_xid }, + + { MPCG_STATE_XID7INITW, MPCG_EVENT_XID2DONE, mpc_action_doxid7 }, + { MPCG_STATE_XID7INITW, MPCG_EVENT_DISCONC, mpc_action_discontact }, + { MPCG_STATE_XID7INITW, MPCG_EVENT_XID2, mpc_action_rcvd_xid7 }, + { MPCG_STATE_XID7INITW, MPCG_EVENT_INOP, mpc_action_go_inop }, + { MPCG_STATE_XID7INITW, MPCG_EVENT_TIMER, mpc_action_timeout }, + { MPCG_STATE_XID7INITW, MPCG_EVENT_XID7DONE, mpc_action_doxid7 }, + { MPCG_STATE_XID7INITW, MPCG_EVENT_DOIO, mpc_action_yside_xid }, + + { MPCG_STATE_XID7INITX, MPCG_EVENT_DISCONC, mpc_action_discontact }, + { MPCG_STATE_XID7INITX, MPCG_EVENT_XID2, mpc_action_rcvd_xid7 }, + { MPCG_STATE_XID7INITX, MPCG_EVENT_INOP, mpc_action_go_inop }, + { MPCG_STATE_XID7INITX, MPCG_EVENT_XID7DONE, mpc_action_doxid7 }, + { MPCG_STATE_XID7INITX, MPCG_EVENT_TIMER, mpc_action_timeout }, + { MPCG_STATE_XID7INITX, MPCG_EVENT_DOIO, mpc_action_yside_xid }, + + { MPCG_STATE_XID0IOWAIT, MPCG_EVENT_XID0DO, mpc_action_doxid0 }, + { MPCG_STATE_XID0IOWAIT, 
MPCG_EVENT_DISCONC, mpc_action_discontact }, + { MPCG_STATE_XID0IOWAIT, MPCG_EVENT_XID2, mpc_action_rcvd_xid0 }, + { MPCG_STATE_XID0IOWAIT, MPCG_EVENT_INOP, mpc_action_go_inop }, + { MPCG_STATE_XID0IOWAIT, MPCG_EVENT_TIMER, mpc_action_timeout }, + { MPCG_STATE_XID0IOWAIT, MPCG_EVENT_DOIO, mpc_action_xside_xid }, + + { MPCG_STATE_XID0IOWAIX, MPCG_EVENT_XID0DO, mpc_action_doxid0 }, + { MPCG_STATE_XID0IOWAIX, MPCG_EVENT_DISCONC, mpc_action_discontact }, + { MPCG_STATE_XID0IOWAIX, MPCG_EVENT_XID2, mpc_action_rcvd_xid0 }, + { MPCG_STATE_XID0IOWAIX, MPCG_EVENT_INOP, mpc_action_go_inop }, + { MPCG_STATE_XID0IOWAIX, MPCG_EVENT_TIMER, mpc_action_timeout }, + { MPCG_STATE_XID0IOWAIX, MPCG_EVENT_DOIO, mpc_action_xside_xid }, + + { MPCG_STATE_XID7INITI, MPCG_EVENT_XID2DONE, mpc_action_doxid7 }, + { MPCG_STATE_XID7INITI, MPCG_EVENT_XID2, mpc_action_rcvd_xid7 }, + { MPCG_STATE_XID7INITI, MPCG_EVENT_DISCONC, mpc_action_discontact }, + { MPCG_STATE_XID7INITI, MPCG_EVENT_INOP, mpc_action_go_inop }, + { MPCG_STATE_XID7INITI, MPCG_EVENT_TIMER, mpc_action_timeout }, + { MPCG_STATE_XID7INITI, MPCG_EVENT_XID7DONE, mpc_action_doxid7 }, + { MPCG_STATE_XID7INITI, MPCG_EVENT_DOIO, mpc_action_xside_xid }, + + { MPCG_STATE_XID7INITZ, MPCG_EVENT_XID2, mpc_action_rcvd_xid7 }, + { MPCG_STATE_XID7INITZ, MPCG_EVENT_XID7DONE, mpc_action_doxid7 }, + { MPCG_STATE_XID7INITZ, MPCG_EVENT_DISCONC, mpc_action_discontact }, + { MPCG_STATE_XID7INITZ, MPCG_EVENT_INOP, mpc_action_go_inop }, + { MPCG_STATE_XID7INITZ, MPCG_EVENT_TIMER, mpc_action_timeout }, + { MPCG_STATE_XID7INITZ, MPCG_EVENT_DOIO, mpc_action_xside_xid }, + + { MPCG_STATE_XID7INITF, MPCG_EVENT_INOP, mpc_action_go_inop }, + { MPCG_STATE_XID7INITF, MPCG_EVENT_XID7DONE, mpc_action_go_ready }, +}; + +static int mpcg_fsm_len = ARRAY_SIZE(mpcg_fsm); + +/* + * MPC Group Station FSM action + * CTCM_PROTO_MPC only + */ +static void mpc_action_go_ready(fsm_instance *fsm, int event, void *arg) +{ + struct net_device *dev = arg; + struct ctcm_priv *priv = NULL; + struct mpc_group *grp = NULL; + + if (dev == NULL) { + printk(KERN_INFO "%s() dev=NULL\n", __FUNCTION__); + return; + } + + ctcm_pr_debug("ctcmpc enter: %s %s()\n", dev->name, __FUNCTION__); + + priv = dev->priv; + if (priv == NULL) { + printk(KERN_INFO "%s() priv=NULL\n", __FUNCTION__); + return; + } + + grp = priv->mpcg; + if (grp == NULL) { + printk(KERN_INFO "%s() grp=NULL\n", __FUNCTION__); + return; + } + + fsm_deltimer(&grp->timer); + + if (grp->saved_xid2->xid2_flag2 == 0x40) { + priv->xid->xid2_flag2 = 0x00; + if (grp->estconnfunc) { + grp->estconnfunc(grp->port_num, 1, + grp->group_max_buflen); + grp->estconnfunc = NULL; + } else if (grp->allochanfunc) + grp->send_qllc_disc = 1; + goto done; + } + + grp->port_persist = 1; + grp->out_of_sequence = 0; + grp->estconn_called = 0; + + tasklet_hi_schedule(&grp->mpc_tasklet2); + + ctcm_pr_debug("ctcmpc exit: %s %s()\n", dev->name, __FUNCTION__); + return; + +done: + fsm_event(grp->fsm, MPCG_EVENT_INOP, dev); + + + ctcm_pr_info("ctcmpc: %s()failure occurred\n", __FUNCTION__); +} + +/* + * helper of ctcm_init_netdevice + * CTCM_PROTO_MPC only + */ +void mpc_group_ready(unsigned long adev) +{ + struct net_device *dev = (struct net_device *)adev; + struct ctcm_priv *priv = NULL; + struct mpc_group *grp = NULL; + struct channel *ch = NULL; + + + ctcm_pr_debug("ctcmpc enter: %s()\n", __FUNCTION__); + + if (dev == NULL) { + printk(KERN_INFO "%s() dev=NULL\n", __FUNCTION__); + return; + } + + priv = dev->priv; + if (priv == NULL) { + printk(KERN_INFO "%s() priv=NULL\n", 
__FUNCTION__); + return; + } + + grp = priv->mpcg; + if (grp == NULL) { + printk(KERN_INFO "ctcmpc:%s() grp=NULL\n", __FUNCTION__); + return; + } + + printk(KERN_NOTICE "ctcmpc: %s GROUP TRANSITIONED TO READY" + " maxbuf:%d\n", + dev->name, grp->group_max_buflen); + + fsm_newstate(grp->fsm, MPCG_STATE_READY); + + /* Put up a read on the channel */ + ch = priv->channel[READ]; + ch->pdu_seq = 0; + if (do_debug_data) + ctcm_pr_debug("ctcmpc: %s() ToDCM_pdu_seq= %08x\n" , + __FUNCTION__, ch->pdu_seq); + + ctcmpc_chx_rxidle(ch->fsm, CTC_EVENT_START, ch); + /* Put the write channel in idle state */ + ch = priv->channel[WRITE]; + if (ch->collect_len > 0) { + spin_lock(&ch->collect_lock); + ctcm_purge_skb_queue(&ch->collect_queue); + ch->collect_len = 0; + spin_unlock(&ch->collect_lock); + } + ctcm_chx_txidle(ch->fsm, CTC_EVENT_START, ch); + + ctcm_clear_busy(dev); + + if (grp->estconnfunc) { + grp->estconnfunc(grp->port_num, 0, + grp->group_max_buflen); + grp->estconnfunc = NULL; + } else + if (grp->allochanfunc) + grp->allochanfunc(grp->port_num, + grp->group_max_buflen); + + grp->send_qllc_disc = 1; + grp->changed_side = 0; + + ctcm_pr_debug("ctcmpc exit: %s()\n", __FUNCTION__); + return; + +} + +/* + * Increment the MPC Group Active Channel Counts + * helper of dev_action (called from channel fsm) + */ +int mpc_channel_action(struct channel *ch, int direction, int action) +{ + struct net_device *dev = ch->netdev; + struct ctcm_priv *priv; + struct mpc_group *grp = NULL; + int rc = 0; + + if (do_debug) + ctcm_pr_debug("ctcmpc enter: %s(): ch=0x%p id=%s\n", + __FUNCTION__, ch, ch->id); + + if (dev == NULL) { + printk(KERN_INFO "ctcmpc_channel_action %i dev=NULL\n", + action); + rc = 1; + goto done; + } + + priv = dev->priv; + if (priv == NULL) { + printk(KERN_INFO + "ctcmpc_channel_action%i priv=NULL, dev=%s\n", + action, dev->name); + rc = 2; + goto done; + } + + grp = priv->mpcg; + + if (grp == NULL) { + printk(KERN_INFO "ctcmpc: %s()%i mpcgroup=NULL, dev=%s\n", + __FUNCTION__, action, dev->name); + rc = 3; + goto done; + } + + ctcm_pr_info( + "ctcmpc: %s() %i(): Grp:%s total_channel_paths=%i " + "active_channels read=%i, write=%i\n", + __FUNCTION__, + action, + fsm_getstate_str(grp->fsm), + grp->num_channel_paths, + grp->active_channels[READ], + grp->active_channels[WRITE]); + + if ((action == MPC_CHANNEL_ADD) && (ch->in_mpcgroup == 0)) { + grp->num_channel_paths++; + grp->active_channels[direction]++; + grp->outstanding_xid2++; + ch->in_mpcgroup = 1; + + if (ch->xid_skb != NULL) + dev_kfree_skb_any(ch->xid_skb); + + ch->xid_skb = __dev_alloc_skb(MPC_BUFSIZE_DEFAULT, + GFP_ATOMIC | GFP_DMA); + if (ch->xid_skb == NULL) { + printk(KERN_INFO "ctcmpc: %s()" + "Couldn't alloc ch xid_skb\n", __FUNCTION__); + fsm_event(grp->fsm, MPCG_EVENT_INOP, dev); + return 1; + } + ch->xid_skb_data = ch->xid_skb->data; + ch->xid_th = (struct th_header *)ch->xid_skb->data; + skb_put(ch->xid_skb, TH_HEADER_LENGTH); + ch->xid = (struct xid2 *)skb_tail_pointer(ch->xid_skb); + skb_put(ch->xid_skb, XID2_LENGTH); + ch->xid_id = skb_tail_pointer(ch->xid_skb); + ch->xid_skb->data = ch->xid_skb_data; + skb_reset_tail_pointer(ch->xid_skb); + ch->xid_skb->len = 0; + + memcpy(skb_put(ch->xid_skb, grp->xid_skb->len), + grp->xid_skb->data, + grp->xid_skb->len); + + ch->xid->xid2_dlc_type = ((CHANNEL_DIRECTION(ch->flags) == READ) + ? 
XID2_READ_SIDE : XID2_WRITE_SIDE); + + if (CHANNEL_DIRECTION(ch->flags) == WRITE) + ch->xid->xid2_buf_len = 0x00; + + ch->xid_skb->data = ch->xid_skb_data; + skb_reset_tail_pointer(ch->xid_skb); + ch->xid_skb->len = 0; + + fsm_newstate(ch->fsm, CH_XID0_PENDING); + + if ((grp->active_channels[READ] > 0) && + (grp->active_channels[WRITE] > 0) && + (fsm_getstate(grp->fsm) < MPCG_STATE_XID2INITW)) { + fsm_newstate(grp->fsm, MPCG_STATE_XID2INITW); + printk(KERN_NOTICE "ctcmpc: %s MPC GROUP " + "CHANNELS ACTIVE\n", dev->name); + } + } else if ((action == MPC_CHANNEL_REMOVE) && + (ch->in_mpcgroup == 1)) { + ch->in_mpcgroup = 0; + grp->num_channel_paths--; + grp->active_channels[direction]--; + + if (ch->xid_skb != NULL) + dev_kfree_skb_any(ch->xid_skb); + ch->xid_skb = NULL; + + if (grp->channels_terminating) + goto done; + + if (((grp->active_channels[READ] == 0) && + (grp->active_channels[WRITE] > 0)) + || ((grp->active_channels[WRITE] == 0) && + (grp->active_channels[READ] > 0))) + fsm_event(grp->fsm, MPCG_EVENT_INOP, dev); + } + +done: + + if (do_debug) { + ctcm_pr_debug( + "ctcmpc: %s() %i Grp:%s ttl_chan_paths=%i " + "active_chans read=%i, write=%i\n", + __FUNCTION__, + action, + fsm_getstate_str(grp->fsm), + grp->num_channel_paths, + grp->active_channels[READ], + grp->active_channels[WRITE]); + + ctcm_pr_debug("ctcmpc exit : %s(): ch=0x%p id=%s\n", + __FUNCTION__, ch, ch->id); + } + return rc; + +} + +/** + * Unpack a just received skb and hand it over to + * upper layers. + * special MPC version of unpack_skb. + * + * ch The channel where this skb has been received. + * pskb The received skb. + */ +static void ctcmpc_unpack_skb(struct channel *ch, struct sk_buff *pskb) +{ + struct net_device *dev = ch->netdev; + struct ctcm_priv *priv = dev->priv; + struct mpc_group *grp = priv->mpcg; + struct pdu *curr_pdu; + struct mpcg_info *mpcginfo; + struct th_header *header = NULL; + struct th_sweep *sweep = NULL; + int pdu_last_seen = 0; + __u32 new_len; + struct sk_buff *skb; + int skblen; + int sendrc = 0; + + if (do_debug) + ctcm_pr_debug("ctcmpc enter: %s() %s cp:%i ch:%s\n", + __FUNCTION__, dev->name, smp_processor_id(), ch->id); + + header = (struct th_header *)pskb->data; + if ((header->th_seg == 0) && + (header->th_ch_flag == 0) && + (header->th_blk_flag == 0) && + (header->th_seq_num == 0)) + /* nothing for us */ goto done; + + if (do_debug_data) { + ctcm_pr_debug("ctcmpc: %s() th_header\n", __FUNCTION__); + ctcmpc_dumpit((char *)header, TH_HEADER_LENGTH); + ctcm_pr_debug("ctcmpc: %s() pskb len: %04x \n", + __FUNCTION__, pskb->len); + } + + pskb->dev = dev; + pskb->ip_summed = CHECKSUM_UNNECESSARY; + skb_pull(pskb, TH_HEADER_LENGTH); + + if (likely(header->th_ch_flag == TH_HAS_PDU)) { + if (do_debug_data) + ctcm_pr_debug("ctcmpc: %s() came into th_has_pdu\n", + __FUNCTION__); + if ((fsm_getstate(grp->fsm) == MPCG_STATE_FLOWC) || + ((fsm_getstate(grp->fsm) == MPCG_STATE_READY) && + (header->th_seq_num != ch->th_seq_num + 1) && + (ch->th_seq_num != 0))) { + /* This is NOT the next segment * + * we are not the correct race winner * + * go away and let someone else win * + * BUT..this only applies if xid negot * + * is done * + */ + grp->out_of_sequence += 1; + __skb_push(pskb, TH_HEADER_LENGTH); + skb_queue_tail(&ch->io_queue, pskb); + if (do_debug_data) + ctcm_pr_debug("ctcmpc: %s() th_seq_num " + "expect:%08x got:%08x\n", __FUNCTION__, + ch->th_seq_num + 1, header->th_seq_num); + + return; + } + grp->out_of_sequence = 0; + ch->th_seq_num = header->th_seq_num; + + if (do_debug_data) + 
ctcm_pr_debug("ctcmpc: %s() FromVTAM_th_seq=%08x\n", + __FUNCTION__, ch->th_seq_num); + + if (unlikely(fsm_getstate(grp->fsm) != MPCG_STATE_READY)) + goto done; + pdu_last_seen = 0; + while ((pskb->len > 0) && !pdu_last_seen) { + curr_pdu = (struct pdu *)pskb->data; + if (do_debug_data) { + ctcm_pr_debug("ctcm: %s() pdu_header\n", + __FUNCTION__); + ctcmpc_dumpit((char *)pskb->data, + PDU_HEADER_LENGTH); + ctcm_pr_debug("ctcm: %s() pskb len: %04x \n", + __FUNCTION__, pskb->len); + } + skb_pull(pskb, PDU_HEADER_LENGTH); + + if (curr_pdu->pdu_flag & PDU_LAST) + pdu_last_seen = 1; + if (curr_pdu->pdu_flag & PDU_CNTL) + pskb->protocol = htons(ETH_P_SNAP); + else + pskb->protocol = htons(ETH_P_SNA_DIX); + + if ((pskb->len <= 0) || (pskb->len > ch->max_bufsize)) { + printk(KERN_INFO + "%s Illegal packet size %d " + "received " + "dropping\n", dev->name, + pskb->len); + priv->stats.rx_dropped++; + priv->stats.rx_length_errors++; + goto done; + } + skb_reset_mac_header(pskb); + new_len = curr_pdu->pdu_offset; + if (do_debug_data) + ctcm_pr_debug("ctcmpc: %s() new_len: %04x \n", + __FUNCTION__, new_len); + if ((new_len == 0) || (new_len > pskb->len)) { + /* should never happen */ + /* pskb len must be hosed...bail out */ + printk(KERN_INFO + "ctcmpc: %s(): invalid pdu" + " offset of %04x - data may be" + "lost\n", __FUNCTION__, new_len); + goto done; + } + skb = __dev_alloc_skb(new_len+4, GFP_ATOMIC); + + if (!skb) { + printk(KERN_INFO + "ctcm: %s Out of memory in " + "%s()- request-len:%04x \n", + dev->name, + __FUNCTION__, + new_len+4); + priv->stats.rx_dropped++; + fsm_event(grp->fsm, + MPCG_EVENT_INOP, dev); + goto done; + } + + memcpy(skb_put(skb, new_len), + pskb->data, new_len); + + skb_reset_mac_header(skb); + skb->dev = pskb->dev; + skb->protocol = pskb->protocol; + skb->ip_summed = CHECKSUM_UNNECESSARY; + *((__u32 *) skb_push(skb, 4)) = ch->pdu_seq; + ch->pdu_seq++; + + if (do_debug_data) + ctcm_pr_debug("%s: ToDCM_pdu_seq= %08x\n", + __FUNCTION__, ch->pdu_seq); + + ctcm_pr_debug("ctcm: %s() skb:%0lx " + "skb len: %d \n", __FUNCTION__, + (unsigned long)skb, skb->len); + if (do_debug_data) { + ctcm_pr_debug("ctcmpc: %s() up to 32 bytes" + " of pdu_data sent\n", + __FUNCTION__); + ctcmpc_dump32((char *)skb->data, skb->len); + } + + skblen = skb->len; + sendrc = netif_rx(skb); + priv->stats.rx_packets++; + priv->stats.rx_bytes += skblen; + skb_pull(pskb, new_len); /* point to next PDU */ + } + } else { + mpcginfo = (struct mpcg_info *) + kmalloc(sizeof(struct mpcg_info), gfp_type()); + if (mpcginfo == NULL) + goto done; + + mpcginfo->ch = ch; + mpcginfo->th = header; + mpcginfo->skb = pskb; + ctcm_pr_debug("ctcmpc: %s() Not PDU - may be control pkt\n", + __FUNCTION__); + /* it's a sweep? 
*/ + sweep = (struct th_sweep *)pskb->data; + mpcginfo->sweep = sweep; + if (header->th_ch_flag == TH_SWEEP_REQ) + mpc_rcvd_sweep_req(mpcginfo); + else if (header->th_ch_flag == TH_SWEEP_RESP) + mpc_rcvd_sweep_resp(mpcginfo); + else if (header->th_blk_flag == TH_DATA_IS_XID) { + struct xid2 *thisxid = (struct xid2 *)pskb->data; + skb_pull(pskb, XID2_LENGTH); + mpcginfo->xid = thisxid; + fsm_event(grp->fsm, MPCG_EVENT_XID2, mpcginfo); + } else if (header->th_blk_flag == TH_DISCONTACT) + fsm_event(grp->fsm, MPCG_EVENT_DISCONC, mpcginfo); + else if (header->th_seq_num != 0) { + printk(KERN_INFO "%s unexpected packet" + " expected control pkt\n", dev->name); + priv->stats.rx_dropped++; + /* mpcginfo only used for non-data transfers */ + kfree(mpcginfo); + if (do_debug_data) + ctcmpc_dump_skb(pskb, -8); + } + } +done: + + dev_kfree_skb_any(pskb); + if (sendrc == NET_RX_DROP) { + printk(KERN_WARNING "%s %s() NETWORK BACKLOG EXCEEDED" + " - PACKET DROPPED\n", dev->name, __FUNCTION__); + fsm_event(grp->fsm, MPCG_EVENT_INOP, dev); + } + + if (do_debug) + ctcm_pr_debug("ctcmpc exit : %s %s(): ch=0x%p id=%s\n", + dev->name, __FUNCTION__, ch, ch->id); +} + +/** + * tasklet helper for mpc's skb unpacking. + * + * ch The channel to work on. + * Allow flow control back pressure to occur here. + * Throttling back channel can result in excessive + * channel inactivity and system deact of channel + */ +void ctcmpc_bh(unsigned long thischan) +{ + struct channel *ch = (struct channel *)thischan; + struct sk_buff *skb; + struct net_device *dev = ch->netdev; + struct ctcm_priv *priv = dev->priv; + struct mpc_group *grp = priv->mpcg; + + if (do_debug) + ctcm_pr_debug("%s cp:%i enter: %s() %s\n", + dev->name, smp_processor_id(), __FUNCTION__, ch->id); + /* caller has requested driver to throttle back */ + while ((fsm_getstate(grp->fsm) != MPCG_STATE_FLOWC) && + (skb = skb_dequeue(&ch->io_queue))) { + ctcmpc_unpack_skb(ch, skb); + if (grp->out_of_sequence > 20) { + /* assume data loss has occurred if */ + /* missing seq_num for extended */ + /* period of time */ + grp->out_of_sequence = 0; + fsm_event(grp->fsm, MPCG_EVENT_INOP, dev); + break; + } + if (skb == skb_peek(&ch->io_queue)) + break; + } + if (do_debug) + ctcm_pr_debug("ctcmpc exit : %s %s(): ch=0x%p id=%s\n", + dev->name, __FUNCTION__, ch, ch->id); + return; +} + +/* + * MPC Group Initializations + */ +struct mpc_group *ctcmpc_init_mpc_group(struct ctcm_priv *priv) +{ + struct mpc_group *grp; + + CTCM_DBF_TEXT(MPC_SETUP, 3, __FUNCTION__); + + grp = kzalloc(sizeof(struct mpc_group), GFP_KERNEL); + if (grp == NULL) + return NULL; + + grp->fsm = + init_fsm("mpcg", mpcg_state_names, mpcg_event_names, + MPCG_NR_STATES, MPCG_NR_EVENTS, mpcg_fsm, + mpcg_fsm_len, GFP_KERNEL); + if (grp->fsm == NULL) { + kfree(grp); + return NULL; + } + + fsm_newstate(grp->fsm, MPCG_STATE_RESET); + fsm_settimer(grp->fsm, &grp->timer); + + grp->xid_skb = + __dev_alloc_skb(MPC_BUFSIZE_DEFAULT, GFP_ATOMIC | GFP_DMA); + if (grp->xid_skb == NULL) { + printk(KERN_INFO "Couldn't alloc MPCgroup xid_skb\n"); + kfree_fsm(grp->fsm); + kfree(grp); + return NULL; + } + /* base xid for all channels in group */ + grp->xid_skb_data = grp->xid_skb->data; + grp->xid_th = (struct th_header *)grp->xid_skb->data; + memcpy(skb_put(grp->xid_skb, TH_HEADER_LENGTH), + &thnorm, TH_HEADER_LENGTH); + + grp->xid = (struct xid2 *) skb_tail_pointer(grp->xid_skb); + memcpy(skb_put(grp->xid_skb, XID2_LENGTH), &init_xid, XID2_LENGTH); + grp->xid->xid2_adj_id = jiffies | 0xfff00000; + grp->xid->xid2_sender_id = 
jiffies; + + grp->xid_id = skb_tail_pointer(grp->xid_skb); + memcpy(skb_put(grp->xid_skb, 4), "VTAM", 4); + + grp->rcvd_xid_skb = + __dev_alloc_skb(MPC_BUFSIZE_DEFAULT, GFP_ATOMIC|GFP_DMA); + if (grp->rcvd_xid_skb == NULL) { + printk(KERN_INFO "Couldn't alloc MPCgroup rcvd_xid_skb\n"); + kfree_fsm(grp->fsm); + dev_kfree_skb(grp->xid_skb); + kfree(grp); + return NULL; + } + grp->rcvd_xid_data = grp->rcvd_xid_skb->data; + grp->rcvd_xid_th = (struct th_header *)grp->rcvd_xid_skb->data; + memcpy(skb_put(grp->rcvd_xid_skb, TH_HEADER_LENGTH), + &thnorm, TH_HEADER_LENGTH); + grp->saved_xid2 = NULL; + priv->xid = grp->xid; + priv->mpcg = grp; + return grp; +} + +/* + * The MPC Group Station FSM + */ + +/* + * MPC Group Station FSM actions + * CTCM_PROTO_MPC only + */ + +/** + * NOP action for statemachines + */ +static void mpc_action_nop(fsm_instance *fi, int event, void *arg) +{ +} + +/* + * invoked when the device transitions to dev_stopped + * MPC will stop each individual channel if a single XID failure + * occurs, or will intitiate all channels be stopped if a GROUP + * level failure occurs. + */ +static void mpc_action_go_inop(fsm_instance *fi, int event, void *arg) +{ + struct net_device *dev = arg; + struct ctcm_priv *priv; + struct mpc_group *grp; + int rc = 0; + struct channel *wch, *rch; + + if (dev == NULL) { + printk(KERN_INFO "%s() dev=NULL\n", __FUNCTION__); + return; + } + + ctcm_pr_debug("ctcmpc enter: %s %s()\n", dev->name, __FUNCTION__); + + priv = dev->priv; + grp = priv->mpcg; + grp->flow_off_called = 0; + + fsm_deltimer(&grp->timer); + + if (grp->channels_terminating) + goto done; + + grp->channels_terminating = 1; + + grp->saved_state = fsm_getstate(grp->fsm); + fsm_newstate(grp->fsm, MPCG_STATE_INOP); + if (grp->saved_state > MPCG_STATE_XID7INITF) + printk(KERN_NOTICE "%s:MPC GROUP INOPERATIVE\n", dev->name); + if ((grp->saved_state != MPCG_STATE_RESET) || + /* dealloc_channel has been called */ + ((grp->saved_state == MPCG_STATE_RESET) && + (grp->port_persist == 0))) + fsm_deltimer(&priv->restart_timer); + + wch = priv->channel[WRITE]; + rch = priv->channel[READ]; + + switch (grp->saved_state) { + case MPCG_STATE_RESET: + case MPCG_STATE_INOP: + case MPCG_STATE_XID2INITW: + case MPCG_STATE_XID0IOWAIT: + case MPCG_STATE_XID2INITX: + case MPCG_STATE_XID7INITW: + case MPCG_STATE_XID7INITX: + case MPCG_STATE_XID0IOWAIX: + case MPCG_STATE_XID7INITI: + case MPCG_STATE_XID7INITZ: + case MPCG_STATE_XID7INITF: + break; + case MPCG_STATE_FLOWC: + case MPCG_STATE_READY: + default: + tasklet_hi_schedule(&wch->ch_disc_tasklet); + } + + grp->xid2_tgnum = 0; + grp->group_max_buflen = 0; /*min of all received */ + grp->outstanding_xid2 = 0; + grp->outstanding_xid7 = 0; + grp->outstanding_xid7_p2 = 0; + grp->saved_xid2 = NULL; + grp->xidnogood = 0; + grp->changed_side = 0; + + grp->rcvd_xid_skb->data = grp->rcvd_xid_data; + skb_reset_tail_pointer(grp->rcvd_xid_skb); + grp->rcvd_xid_skb->len = 0; + grp->rcvd_xid_th = (struct th_header *)grp->rcvd_xid_skb->data; + memcpy(skb_put(grp->rcvd_xid_skb, TH_HEADER_LENGTH), &thnorm, + TH_HEADER_LENGTH); + + if (grp->send_qllc_disc == 1) { + grp->send_qllc_disc = 0; + rc = mpc_send_qllc_discontact(dev); + } + + /* DO NOT issue DEV_EVENT_STOP directly out of this code */ + /* This can result in INOP of VTAM PU due to halting of */ + /* outstanding IO which causes a sense to be returned */ + /* Only about 3 senses are allowed and then IOS/VTAM will*/ + /* ebcome unreachable without manual intervention */ + if ((grp->port_persist == 1) || 
(grp->alloc_called)) { + grp->alloc_called = 0; + fsm_deltimer(&priv->restart_timer); + fsm_addtimer(&priv->restart_timer, + 500, + DEV_EVENT_RESTART, + dev); + fsm_newstate(grp->fsm, MPCG_STATE_RESET); + if (grp->saved_state > MPCG_STATE_XID7INITF) + printk(KERN_NOTICE "%s:MPC GROUP RECOVERY SCHEDULED\n", + dev->name); + } else { + fsm_deltimer(&priv->restart_timer); + fsm_addtimer(&priv->restart_timer, 500, DEV_EVENT_STOP, dev); + fsm_newstate(grp->fsm, MPCG_STATE_RESET); + printk(KERN_NOTICE "%s:MPC GROUP RECOVERY NOT ATTEMPTED\n", + dev->name); + } + +done: + ctcm_pr_debug("ctcmpc exit:%s %s()\n", dev->name, __FUNCTION__); + return; +} + +/** + * Handle mpc group action timeout. + * MPC Group Station FSM action + * CTCM_PROTO_MPC only + * + * fi An instance of an mpc_group fsm. + * event The event, just happened. + * arg Generic pointer, casted from net_device * upon call. + */ +static void mpc_action_timeout(fsm_instance *fi, int event, void *arg) +{ + struct net_device *dev = arg; + struct ctcm_priv *priv; + struct mpc_group *grp; + struct channel *wch; + struct channel *rch; + + CTCM_DBF_TEXT(MPC_TRACE, 6, __FUNCTION__); + + if (dev == NULL) { + CTCM_DBF_TEXT_(MPC_ERROR, 4, "%s: dev=NULL\n", __FUNCTION__); + return; + } + + priv = dev->priv; + grp = priv->mpcg; + wch = priv->channel[WRITE]; + rch = priv->channel[READ]; + + switch (fsm_getstate(grp->fsm)) { + case MPCG_STATE_XID2INITW: + /* Unless there is outstanding IO on the */ + /* channel just return and wait for ATTN */ + /* interrupt to begin XID negotiations */ + if ((fsm_getstate(rch->fsm) == CH_XID0_PENDING) && + (fsm_getstate(wch->fsm) == CH_XID0_PENDING)) + break; + default: + fsm_event(grp->fsm, MPCG_EVENT_INOP, dev); + } + + CTCM_DBF_TEXT_(MPC_TRACE, 6, "%s: dev=%s exit", + __FUNCTION__, dev->name); + return; +} + +/* + * MPC Group Station FSM action + * CTCM_PROTO_MPC only + */ +void mpc_action_discontact(fsm_instance *fi, int event, void *arg) +{ + struct mpcg_info *mpcginfo = arg; + struct channel *ch = mpcginfo->ch; + struct net_device *dev = ch->netdev; + struct ctcm_priv *priv = dev->priv; + struct mpc_group *grp = priv->mpcg; + + if (ch == NULL) { + printk(KERN_INFO "%s() ch=NULL\n", __FUNCTION__); + return; + } + if (ch->netdev == NULL) { + printk(KERN_INFO "%s() dev=NULL\n", __FUNCTION__); + return; + } + + ctcm_pr_debug("ctcmpc enter: %s %s()\n", dev->name, __FUNCTION__); + + grp->send_qllc_disc = 1; + fsm_event(grp->fsm, MPCG_EVENT_INOP, dev); + + ctcm_pr_debug("ctcmpc exit: %s %s()\n", dev->name, __FUNCTION__); + return; +} + +/* + * MPC Group Station - not part of FSM + * CTCM_PROTO_MPC only + * called from add_channel in ctcm_main.c + */ +void mpc_action_send_discontact(unsigned long thischan) +{ + struct channel *ch; + struct net_device *dev; + struct ctcm_priv *priv; + struct mpc_group *grp; + int rc = 0; + unsigned long saveflags; + + ch = (struct channel *)thischan; + dev = ch->netdev; + priv = dev->priv; + grp = priv->mpcg; + + ctcm_pr_info("ctcmpc: %s cp:%i enter: %s() GrpState:%s ChState:%s\n", + dev->name, + smp_processor_id(), + __FUNCTION__, + fsm_getstate_str(grp->fsm), + fsm_getstate_str(ch->fsm)); + saveflags = 0; /* avoids compiler warning with + spin_unlock_irqrestore */ + + spin_lock_irqsave(get_ccwdev_lock(ch->cdev), saveflags); + rc = ccw_device_start(ch->cdev, &ch->ccw[15], + (unsigned long)ch, 0xff, 0); + spin_unlock_irqrestore(get_ccwdev_lock(ch->cdev), saveflags); + + if (rc != 0) { + ctcm_pr_info("ctcmpc: %s() ch:%s IO failed \n", + __FUNCTION__, + ch->id); + ctcm_ccw_check_rc(ch, 
rc, "send discontact"); + /* Not checking return code value here */ + /* Making best effort to notify partner*/ + /* that MPC Group is going down */ + } + + ctcm_pr_debug("ctcmpc exit: %s %s()\n", dev->name, __FUNCTION__); + return; +} + + +/* + * helper function of mpc FSM + * CTCM_PROTO_MPC only + * mpc_action_rcvd_xid7 +*/ +static int mpc_validate_xid(struct mpcg_info *mpcginfo) +{ + struct channel *ch = mpcginfo->ch; + struct net_device *dev = ch->netdev; + struct ctcm_priv *priv = dev->priv; + struct mpc_group *grp = priv->mpcg; + struct xid2 *xid = mpcginfo->xid; + int failed = 0; + int rc = 0; + __u64 our_id, their_id = 0; + int len; + + len = TH_HEADER_LENGTH + PDU_HEADER_LENGTH; + + ctcm_pr_debug("ctcmpc enter: %s()\n", __FUNCTION__); + + if (mpcginfo->xid == NULL) { + printk(KERN_INFO "%s() xid=NULL\n", __FUNCTION__); + rc = 1; + goto done; + } + + ctcm_pr_debug("ctcmpc : %s xid received()\n", __FUNCTION__); + ctcmpc_dumpit((char *)mpcginfo->xid, XID2_LENGTH); + + /*the received direction should be the opposite of ours */ + if (((CHANNEL_DIRECTION(ch->flags) == READ) ? XID2_WRITE_SIDE : + XID2_READ_SIDE) != xid->xid2_dlc_type) { + failed = 1; + printk(KERN_INFO "ctcmpc:%s() XID REJECTED - READ-WRITE CH " + "Pairing Invalid \n", __FUNCTION__); + } + + if (xid->xid2_dlc_type == XID2_READ_SIDE) { + ctcm_pr_debug("ctcmpc: %s(): grpmaxbuf:%d xid2buflen:%d\n", + __FUNCTION__, grp->group_max_buflen, + xid->xid2_buf_len); + + if (grp->group_max_buflen == 0 || + grp->group_max_buflen > xid->xid2_buf_len - len) + grp->group_max_buflen = xid->xid2_buf_len - len; + } + + + if (grp->saved_xid2 == NULL) { + grp->saved_xid2 = + (struct xid2 *)skb_tail_pointer(grp->rcvd_xid_skb); + + memcpy(skb_put(grp->rcvd_xid_skb, + XID2_LENGTH), xid, XID2_LENGTH); + grp->rcvd_xid_skb->data = grp->rcvd_xid_data; + + skb_reset_tail_pointer(grp->rcvd_xid_skb); + grp->rcvd_xid_skb->len = 0; + + /* convert two 32 bit numbers into 1 64 bit for id compare */ + our_id = (__u64)priv->xid->xid2_adj_id; + our_id = our_id << 32; + our_id = our_id + priv->xid->xid2_sender_id; + their_id = (__u64)xid->xid2_adj_id; + their_id = their_id << 32; + their_id = their_id + xid->xid2_sender_id; + /* lower id assume the xside role */ + if (our_id < their_id) { + grp->roll = XSIDE; + ctcm_pr_debug("ctcmpc :%s() WE HAVE LOW ID-" + "TAKE XSIDE\n", __FUNCTION__); + } else { + grp->roll = YSIDE; + ctcm_pr_debug("ctcmpc :%s() WE HAVE HIGH ID-" + "TAKE YSIDE\n", __FUNCTION__); + } + + } else { + if (xid->xid2_flag4 != grp->saved_xid2->xid2_flag4) { + failed = 1; + printk(KERN_INFO "%s XID REJECTED - XID Flag Byte4\n", + __FUNCTION__); + } + if (xid->xid2_flag2 == 0x40) { + failed = 1; + printk(KERN_INFO "%s XID REJECTED - XID NOGOOD\n", + __FUNCTION__); + } + if (xid->xid2_adj_id != grp->saved_xid2->xid2_adj_id) { + failed = 1; + printk(KERN_INFO "%s XID REJECTED - " + "Adjacent Station ID Mismatch\n", + __FUNCTION__); + } + if (xid->xid2_sender_id != grp->saved_xid2->xid2_sender_id) { + failed = 1; + printk(KERN_INFO "%s XID REJECTED - " + "Sender Address Mismatch\n", __FUNCTION__); + + } + } + + if (failed) { + ctcm_pr_info("ctcmpc : %s() failed\n", __FUNCTION__); + priv->xid->xid2_flag2 = 0x40; + grp->saved_xid2->xid2_flag2 = 0x40; + rc = 1; + } + +done: + + ctcm_pr_debug("ctcmpc exit: %s()\n", __FUNCTION__); + return rc; +} + +/* + * MPC Group Station FSM action + * CTCM_PROTO_MPC only + */ +static void mpc_action_side_xid(fsm_instance *fsm, void *arg, int side) +{ + struct channel *ch = arg; + struct ctcm_priv *priv; + struct 
mpc_group *grp = NULL; + struct net_device *dev = NULL; + int rc = 0; + int gotlock = 0; + unsigned long saveflags = 0; /* avoids compiler warning with + spin_unlock_irqrestore */ + + if (ch == NULL) { + printk(KERN_INFO "%s ch=NULL\n", __FUNCTION__); + goto done; + } + + if (do_debug) + ctcm_pr_debug("ctcmpc enter: %s(): cp=%i ch=0x%p id=%s\n", + __FUNCTION__, smp_processor_id(), ch, ch->id); + + dev = ch->netdev; + if (dev == NULL) { + printk(KERN_INFO "%s dev=NULL\n", __FUNCTION__); + goto done; + } + + priv = dev->priv; + if (priv == NULL) { + printk(KERN_INFO "%s priv=NULL\n", __FUNCTION__); + goto done; + } + + grp = priv->mpcg; + if (grp == NULL) { + printk(KERN_INFO "%s grp=NULL\n", __FUNCTION__); + goto done; + } + + if (ctcm_checkalloc_buffer(ch)) + goto done; + + /* skb data-buffer referencing: */ + + ch->trans_skb->data = ch->trans_skb_data; + skb_reset_tail_pointer(ch->trans_skb); + ch->trans_skb->len = 0; + /* result of the previous 3 statements is NOT always + * already set after ctcm_checkalloc_buffer + * because of possible reuse of the trans_skb + */ + memset(ch->trans_skb->data, 0, 16); + ch->rcvd_xid_th = (struct th_header *)ch->trans_skb_data; + /* check is main purpose here: */ + skb_put(ch->trans_skb, TH_HEADER_LENGTH); + ch->rcvd_xid = (struct xid2 *)skb_tail_pointer(ch->trans_skb); + /* check is main purpose here: */ + skb_put(ch->trans_skb, XID2_LENGTH); + ch->rcvd_xid_id = skb_tail_pointer(ch->trans_skb); + /* cleanup back to startpoint */ + ch->trans_skb->data = ch->trans_skb_data; + skb_reset_tail_pointer(ch->trans_skb); + ch->trans_skb->len = 0; + + /* non-checking rewrite of above skb data-buffer referencing: */ + /* + memset(ch->trans_skb->data, 0, 16); + ch->rcvd_xid_th = (struct th_header *)ch->trans_skb_data; + ch->rcvd_xid = (struct xid2 *)(ch->trans_skb_data + TH_HEADER_LENGTH); + ch->rcvd_xid_id = ch->trans_skb_data + TH_HEADER_LENGTH + XID2_LENGTH; + */ + + ch->ccw[8].flags = CCW_FLAG_SLI | CCW_FLAG_CC; + ch->ccw[8].count = 0; + ch->ccw[8].cda = 0x00; + + if (side == XSIDE) { + /* mpc_action_xside_xid */ + if (ch->xid_th == NULL) { + printk(KERN_INFO "%s ch->xid_th=NULL\n", __FUNCTION__); + goto done; + } + ch->ccw[9].cmd_code = CCW_CMD_WRITE; + ch->ccw[9].flags = CCW_FLAG_SLI | CCW_FLAG_CC; + ch->ccw[9].count = TH_HEADER_LENGTH; + ch->ccw[9].cda = virt_to_phys(ch->xid_th); + + if (ch->xid == NULL) { + printk(KERN_INFO "%s ch->xid=NULL\n", __FUNCTION__); + goto done; + } + + ch->ccw[10].cmd_code = CCW_CMD_WRITE; + ch->ccw[10].flags = CCW_FLAG_SLI | CCW_FLAG_CC; + ch->ccw[10].count = XID2_LENGTH; + ch->ccw[10].cda = virt_to_phys(ch->xid); + + ch->ccw[11].cmd_code = CCW_CMD_READ; + ch->ccw[11].flags = CCW_FLAG_SLI | CCW_FLAG_CC; + ch->ccw[11].count = TH_HEADER_LENGTH; + ch->ccw[11].cda = virt_to_phys(ch->rcvd_xid_th); + + ch->ccw[12].cmd_code = CCW_CMD_READ; + ch->ccw[12].flags = CCW_FLAG_SLI | CCW_FLAG_CC; + ch->ccw[12].count = XID2_LENGTH; + ch->ccw[12].cda = virt_to_phys(ch->rcvd_xid); + + ch->ccw[13].cmd_code = CCW_CMD_READ; + ch->ccw[13].cda = virt_to_phys(ch->rcvd_xid_id); + + } else { /* side == YSIDE : mpc_action_yside_xid */ + ch->ccw[9].cmd_code = CCW_CMD_READ; + ch->ccw[9].flags = CCW_FLAG_SLI | CCW_FLAG_CC; + ch->ccw[9].count = TH_HEADER_LENGTH; + ch->ccw[9].cda = virt_to_phys(ch->rcvd_xid_th); + + ch->ccw[10].cmd_code = CCW_CMD_READ; + ch->ccw[10].flags = CCW_FLAG_SLI | CCW_FLAG_CC; + ch->ccw[10].count = XID2_LENGTH; + ch->ccw[10].cda = virt_to_phys(ch->rcvd_xid); + + if (ch->xid_th == NULL) { + printk(KERN_INFO "%s ch->xid_th=NULL\n", 
__FUNCTION__); + goto done; + } + ch->ccw[11].cmd_code = CCW_CMD_WRITE; + ch->ccw[11].flags = CCW_FLAG_SLI | CCW_FLAG_CC; + ch->ccw[11].count = TH_HEADER_LENGTH; + ch->ccw[11].cda = virt_to_phys(ch->xid_th); + + if (ch->xid == NULL) { + printk(KERN_INFO "%s ch->xid=NULL\n", __FUNCTION__); + goto done; + } + ch->ccw[12].cmd_code = CCW_CMD_WRITE; + ch->ccw[12].flags = CCW_FLAG_SLI | CCW_FLAG_CC; + ch->ccw[12].count = XID2_LENGTH; + ch->ccw[12].cda = virt_to_phys(ch->xid); + + if (ch->xid_id == NULL) { + printk(KERN_INFO "%s ch->xid_id=NULL\n", __FUNCTION__); + goto done; + } + ch->ccw[13].cmd_code = CCW_CMD_WRITE; + ch->ccw[13].cda = virt_to_phys(ch->xid_id); + + } + ch->ccw[13].flags = CCW_FLAG_SLI | CCW_FLAG_CC; + ch->ccw[13].count = 4; + + ch->ccw[14].cmd_code = CCW_CMD_NOOP; + ch->ccw[14].flags = CCW_FLAG_SLI; + ch->ccw[14].count = 0; + ch->ccw[14].cda = 0; + + if (do_debug_ccw) + ctcmpc_dumpit((char *)&ch->ccw[8], sizeof(struct ccw1) * 7); + + ctcmpc_dumpit((char *)ch->xid_th, TH_HEADER_LENGTH); + ctcmpc_dumpit((char *)ch->xid, XID2_LENGTH); + ctcmpc_dumpit((char *)ch->xid_id, 4); + if (!in_irq()) { + /* Such conditional locking is a known problem for + * sparse because its static undeterministic. + * Warnings should be ignored here. */ + spin_lock_irqsave(get_ccwdev_lock(ch->cdev), saveflags); + gotlock = 1; + } + + fsm_addtimer(&ch->timer, 5000 , CTC_EVENT_TIMER, ch); + rc = ccw_device_start(ch->cdev, &ch->ccw[8], + (unsigned long)ch, 0xff, 0); + + if (gotlock) /* see remark above about conditional locking */ + spin_unlock_irqrestore(get_ccwdev_lock(ch->cdev), saveflags); + + if (rc != 0) { + ctcm_pr_info("ctcmpc: %s() ch:%s IO failed \n", + __FUNCTION__, ch->id); + ctcm_ccw_check_rc(ch, rc, + (side == XSIDE) ? "x-side XID" : "y-side XID"); + } + +done: + if (do_debug) + ctcm_pr_debug("ctcmpc exit : %s(): ch=0x%p id=%s\n", + __FUNCTION__, ch, ch->id); + return; + +} + +/* + * MPC Group Station FSM action + * CTCM_PROTO_MPC only + */ +static void mpc_action_xside_xid(fsm_instance *fsm, int event, void *arg) +{ + mpc_action_side_xid(fsm, arg, XSIDE); +} + +/* + * MPC Group Station FSM action + * CTCM_PROTO_MPC only + */ +static void mpc_action_yside_xid(fsm_instance *fsm, int event, void *arg) +{ + mpc_action_side_xid(fsm, arg, YSIDE); +} + +/* + * MPC Group Station FSM action + * CTCM_PROTO_MPC only + */ +static void mpc_action_doxid0(fsm_instance *fsm, int event, void *arg) +{ + struct channel *ch = arg; + struct ctcm_priv *priv; + struct mpc_group *grp = NULL; + struct net_device *dev = NULL; + + if (do_debug) + ctcm_pr_debug("ctcmpc enter: %s(): cp=%i ch=0x%p id=%s\n", + __FUNCTION__, smp_processor_id(), ch, ch->id); + + if (ch == NULL) { + printk(KERN_WARNING "%s ch=NULL\n", __FUNCTION__); + goto done; + } + + dev = ch->netdev; + if (dev == NULL) { + printk(KERN_WARNING "%s dev=NULL\n", __FUNCTION__); + goto done; + } + + priv = dev->priv; + if (priv == NULL) { + printk(KERN_WARNING "%s priv=NULL\n", __FUNCTION__); + goto done; + } + + grp = priv->mpcg; + if (grp == NULL) { + printk(KERN_WARNING "%s grp=NULL\n", __FUNCTION__); + goto done; + } + + if (ch->xid == NULL) { + printk(KERN_WARNING "%s ch-xid=NULL\n", __FUNCTION__); + goto done; + } + + fsm_newstate(ch->fsm, CH_XID0_INPROGRESS); + + ch->xid->xid2_option = XID2_0; + + switch (fsm_getstate(grp->fsm)) { + case MPCG_STATE_XID2INITW: + case MPCG_STATE_XID2INITX: + ch->ccw[8].cmd_code = CCW_CMD_SENSE_CMD; + break; + case MPCG_STATE_XID0IOWAIT: + case MPCG_STATE_XID0IOWAIX: + ch->ccw[8].cmd_code = CCW_CMD_WRITE_CTL; + break; + } 
+ + fsm_event(grp->fsm, MPCG_EVENT_DOIO, ch); + +done: + if (do_debug) + ctcm_pr_debug("ctcmpc exit : %s(): ch=0x%p id=%s\n", + __FUNCTION__, ch, ch->id); + return; + +} + +/* + * MPC Group Station FSM action + * CTCM_PROTO_MPC only +*/ +static void mpc_action_doxid7(fsm_instance *fsm, int event, void *arg) +{ + struct net_device *dev = arg; + struct ctcm_priv *priv = NULL; + struct mpc_group *grp = NULL; + int direction; + int rc = 0; + int send = 0; + + ctcm_pr_debug("ctcmpc enter: %s() \n", __FUNCTION__); + + if (dev == NULL) { + printk(KERN_INFO "%s dev=NULL \n", __FUNCTION__); + rc = 1; + goto done; + } + + priv = dev->priv; + if (priv == NULL) { + printk(KERN_INFO "%s priv=NULL \n", __FUNCTION__); + rc = 1; + goto done; + } + + grp = priv->mpcg; + if (grp == NULL) { + printk(KERN_INFO "%s grp=NULL \n", __FUNCTION__); + rc = 1; + goto done; + } + + for (direction = READ; direction <= WRITE; direction++) { + struct channel *ch = priv->channel[direction]; + struct xid2 *thisxid = ch->xid; + ch->xid_skb->data = ch->xid_skb_data; + skb_reset_tail_pointer(ch->xid_skb); + ch->xid_skb->len = 0; + thisxid->xid2_option = XID2_7; + send = 0; + + /* xid7 phase 1 */ + if (grp->outstanding_xid7_p2 > 0) { + if (grp->roll == YSIDE) { + if (fsm_getstate(ch->fsm) == CH_XID7_PENDING1) { + fsm_newstate(ch->fsm, CH_XID7_PENDING2); + ch->ccw[8].cmd_code = CCW_CMD_SENSE_CMD; + memcpy(skb_put(ch->xid_skb, + TH_HEADER_LENGTH), + &thdummy, TH_HEADER_LENGTH); + send = 1; + } + } else if (fsm_getstate(ch->fsm) < CH_XID7_PENDING2) { + fsm_newstate(ch->fsm, CH_XID7_PENDING2); + ch->ccw[8].cmd_code = CCW_CMD_WRITE_CTL; + memcpy(skb_put(ch->xid_skb, + TH_HEADER_LENGTH), + &thnorm, TH_HEADER_LENGTH); + send = 1; + } + } else { + /* xid7 phase 2 */ + if (grp->roll == YSIDE) { + if (fsm_getstate(ch->fsm) < CH_XID7_PENDING4) { + fsm_newstate(ch->fsm, CH_XID7_PENDING4); + memcpy(skb_put(ch->xid_skb, + TH_HEADER_LENGTH), + &thnorm, TH_HEADER_LENGTH); + ch->ccw[8].cmd_code = CCW_CMD_WRITE_CTL; + send = 1; + } + } else if (fsm_getstate(ch->fsm) == CH_XID7_PENDING3) { + fsm_newstate(ch->fsm, CH_XID7_PENDING4); + ch->ccw[8].cmd_code = CCW_CMD_SENSE_CMD; + memcpy(skb_put(ch->xid_skb, TH_HEADER_LENGTH), + &thdummy, TH_HEADER_LENGTH); + send = 1; + } + } + + if (send) + fsm_event(grp->fsm, MPCG_EVENT_DOIO, ch); + } + +done: + + if (rc != 0) + fsm_event(grp->fsm, MPCG_EVENT_INOP, dev); + + return; +} + +/* + * MPC Group Station FSM action + * CTCM_PROTO_MPC only + */ +static void mpc_action_rcvd_xid0(fsm_instance *fsm, int event, void *arg) +{ + + struct mpcg_info *mpcginfo = arg; + struct channel *ch = mpcginfo->ch; + struct net_device *dev = ch->netdev; + struct ctcm_priv *priv; + struct mpc_group *grp; + + if (do_debug) + ctcm_pr_debug("ctcmpc enter: %s(): cp=%i ch=0x%p id=%s\n", + __FUNCTION__, smp_processor_id(), ch, ch->id); + + priv = dev->priv; + grp = priv->mpcg; + + ctcm_pr_debug("ctcmpc in:%s() %s xid2:%i xid7:%i xidt_p2:%i \n", + __FUNCTION__, ch->id, + grp->outstanding_xid2, + grp->outstanding_xid7, + grp->outstanding_xid7_p2); + + if (fsm_getstate(ch->fsm) < CH_XID7_PENDING) + fsm_newstate(ch->fsm, CH_XID7_PENDING); + + grp->outstanding_xid2--; + grp->outstanding_xid7++; + grp->outstanding_xid7_p2++; + + /* must change state before validating xid to */ + /* properly handle interim interrupts received*/ + switch (fsm_getstate(grp->fsm)) { + case MPCG_STATE_XID2INITW: + fsm_newstate(grp->fsm, MPCG_STATE_XID2INITX); + mpc_validate_xid(mpcginfo); + break; + case MPCG_STATE_XID0IOWAIT: + fsm_newstate(grp->fsm, 
MPCG_STATE_XID0IOWAIX); + mpc_validate_xid(mpcginfo); + break; + case MPCG_STATE_XID2INITX: + if (grp->outstanding_xid2 == 0) { + fsm_newstate(grp->fsm, MPCG_STATE_XID7INITW); + mpc_validate_xid(mpcginfo); + fsm_event(grp->fsm, MPCG_EVENT_XID2DONE, dev); + } + break; + case MPCG_STATE_XID0IOWAIX: + if (grp->outstanding_xid2 == 0) { + fsm_newstate(grp->fsm, MPCG_STATE_XID7INITI); + mpc_validate_xid(mpcginfo); + fsm_event(grp->fsm, MPCG_EVENT_XID2DONE, dev); + } + break; + } + kfree(mpcginfo); + + if (do_debug) { + ctcm_pr_debug("ctcmpc:%s() %s xid2:%i xid7:%i xidt_p2:%i \n", + __FUNCTION__, ch->id, + grp->outstanding_xid2, + grp->outstanding_xid7, + grp->outstanding_xid7_p2); + ctcm_pr_debug("ctcmpc:%s() %s grpstate: %s chanstate: %s \n", + __FUNCTION__, ch->id, + fsm_getstate_str(grp->fsm), + fsm_getstate_str(ch->fsm)); + } + return; + +} + + +/* + * MPC Group Station FSM action + * CTCM_PROTO_MPC only + */ +static void mpc_action_rcvd_xid7(fsm_instance *fsm, int event, void *arg) +{ + struct mpcg_info *mpcginfo = arg; + struct channel *ch = mpcginfo->ch; + struct net_device *dev = ch->netdev; + struct ctcm_priv *priv = dev->priv; + struct mpc_group *grp = priv->mpcg; + + if (do_debug) { + ctcm_pr_debug("ctcmpc enter: %s(): cp=%i ch=0x%p id=%s\n", + __FUNCTION__, smp_processor_id(), ch, ch->id); + + ctcm_pr_debug("ctcmpc: outstanding_xid7: %i, " + " outstanding_xid7_p2: %i\n", + grp->outstanding_xid7, + grp->outstanding_xid7_p2); + } + + grp->outstanding_xid7--; + ch->xid_skb->data = ch->xid_skb_data; + skb_reset_tail_pointer(ch->xid_skb); + ch->xid_skb->len = 0; + + switch (fsm_getstate(grp->fsm)) { + case MPCG_STATE_XID7INITI: + fsm_newstate(grp->fsm, MPCG_STATE_XID7INITZ); + mpc_validate_xid(mpcginfo); + break; + case MPCG_STATE_XID7INITW: + fsm_newstate(grp->fsm, MPCG_STATE_XID7INITX); + mpc_validate_xid(mpcginfo); + break; + case MPCG_STATE_XID7INITZ: + case MPCG_STATE_XID7INITX: + if (grp->outstanding_xid7 == 0) { + if (grp->outstanding_xid7_p2 > 0) { + grp->outstanding_xid7 = + grp->outstanding_xid7_p2; + grp->outstanding_xid7_p2 = 0; + } else + fsm_newstate(grp->fsm, MPCG_STATE_XID7INITF); + + mpc_validate_xid(mpcginfo); + fsm_event(grp->fsm, MPCG_EVENT_XID7DONE, dev); + break; + } + mpc_validate_xid(mpcginfo); + break; + } + + kfree(mpcginfo); + + if (do_debug) + ctcm_pr_debug("ctcmpc exit: %s(): cp=%i ch=0x%p id=%s\n", + __FUNCTION__, smp_processor_id(), ch, ch->id); + return; + +} + +/* + * mpc_action helper of an MPC Group Station FSM action + * CTCM_PROTO_MPC only + */ +static int mpc_send_qllc_discontact(struct net_device *dev) +{ + int rc = 0; + __u32 new_len = 0; + struct sk_buff *skb; + struct qllc *qllcptr; + struct ctcm_priv *priv; + struct mpc_group *grp; + + ctcm_pr_debug("ctcmpc enter: %s()\n", __FUNCTION__); + + if (dev == NULL) { + printk(KERN_INFO "%s() dev=NULL\n", __FUNCTION__); + rc = 1; + goto done; + } + + priv = dev->priv; + if (priv == NULL) { + printk(KERN_INFO "%s() priv=NULL\n", __FUNCTION__); + rc = 1; + goto done; + } + + grp = priv->mpcg; + if (grp == NULL) { + printk(KERN_INFO "%s() grp=NULL\n", __FUNCTION__); + rc = 1; + goto done; + } + ctcm_pr_info("ctcmpc: %s() GROUP STATE: %s\n", __FUNCTION__, + mpcg_state_names[grp->saved_state]); + + switch (grp->saved_state) { + /* + * establish conn callback function is + * preferred method to report failure + */ + case MPCG_STATE_XID0IOWAIT: + case MPCG_STATE_XID0IOWAIX: + case MPCG_STATE_XID7INITI: + case MPCG_STATE_XID7INITZ: + case MPCG_STATE_XID2INITW: + case MPCG_STATE_XID2INITX: + case 
MPCG_STATE_XID7INITW: + case MPCG_STATE_XID7INITX: + if (grp->estconnfunc) { + grp->estconnfunc(grp->port_num, -1, 0); + grp->estconnfunc = NULL; + break; + } + case MPCG_STATE_FLOWC: + case MPCG_STATE_READY: + grp->send_qllc_disc = 2; + new_len = sizeof(struct qllc); + qllcptr = kzalloc(new_len, gfp_type() | GFP_DMA); + if (qllcptr == NULL) { + printk(KERN_INFO + "ctcmpc: Out of memory in %s()\n", + dev->name); + rc = 1; + goto done; + } + + qllcptr->qllc_address = 0xcc; + qllcptr->qllc_commands = 0x03; + + skb = __dev_alloc_skb(new_len, GFP_ATOMIC); + + if (skb == NULL) { + printk(KERN_INFO "%s Out of memory in mpc_send_qllc\n", + dev->name); + priv->stats.rx_dropped++; + rc = 1; + kfree(qllcptr); + goto done; + } + + memcpy(skb_put(skb, new_len), qllcptr, new_len); + kfree(qllcptr); + + if (skb_headroom(skb) < 4) { + printk(KERN_INFO "ctcmpc: %s() Unable to" + " build discontact for %s\n", + __FUNCTION__, dev->name); + rc = 1; + dev_kfree_skb_any(skb); + goto done; + } + + *((__u32 *)skb_push(skb, 4)) = priv->channel[READ]->pdu_seq; + priv->channel[READ]->pdu_seq++; + if (do_debug_data) + ctcm_pr_debug("ctcmpc: %s ToDCM_pdu_seq= %08x\n", + __FUNCTION__, priv->channel[READ]->pdu_seq); + + /* receipt of CC03 resets anticipated sequence number on + receiving side */ + priv->channel[READ]->pdu_seq = 0x00; + skb_reset_mac_header(skb); + skb->dev = dev; + skb->protocol = htons(ETH_P_SNAP); + skb->ip_summed = CHECKSUM_UNNECESSARY; + + ctcmpc_dumpit((char *)skb->data, (sizeof(struct qllc) + 4)); + + netif_rx(skb); + break; + default: + break; + + } + +done: + ctcm_pr_debug("ctcmpc exit: %s()\n", __FUNCTION__); + return rc; +} +/* --- This is the END my friend --- */ + diff --git a/drivers/s390/net/ctcm_mpc.h b/drivers/s390/net/ctcm_mpc.h new file mode 100644 index 000000000000..f99686069a91 --- /dev/null +++ b/drivers/s390/net/ctcm_mpc.h @@ -0,0 +1,239 @@ +/* + * drivers/s390/net/ctcm_mpc.h + * + * Copyright IBM Corp. 2007 + * Authors: Peter Tiedemann (ptiedem@de.ibm.com) + * + * MPC additions: + * Belinda Thompson (belindat@us.ibm.com) + * Andy Richter (richtera@us.ibm.com) + */ + +#ifndef _CTC_MPC_H_ +#define _CTC_MPC_H_ + +#include +#include "fsm.h" + +/* + * MPC external interface + * Note that ctc_mpc_xyz are called with a lock on ................ 
+ */ + +/* port_number is the mpc device 0, 1, 2 etc mpc2 is port_number 2 */ + +/* passive open Just wait for XID2 exchange */ +extern int ctc_mpc_alloc_channel(int port, + void (*callback)(int port_num, int max_write_size)); +/* active open Alloc then send XID2 */ +extern void ctc_mpc_establish_connectivity(int port, + void (*callback)(int port_num, int rc, int max_write_size)); + +extern void ctc_mpc_dealloc_ch(int port); +extern void ctc_mpc_flow_control(int port, int flowc); + +/* + * other MPC Group prototypes and structures + */ + +#define ETH_P_SNA_DIX 0x80D5 + +/* + * Declaration of an XID2 + * + */ +#define ALLZEROS 0x0000000000000000 + +#define XID_FM2 0x20 +#define XID2_0 0x00 +#define XID2_7 0x07 +#define XID2_WRITE_SIDE 0x04 +#define XID2_READ_SIDE 0x05 + +struct xid2 { + __u8 xid2_type_id; + __u8 xid2_len; + __u32 xid2_adj_id; + __u8 xid2_rlen; + __u8 xid2_resv1; + __u8 xid2_flag1; + __u8 xid2_fmtt; + __u8 xid2_flag4; + __u16 xid2_resv2; + __u8 xid2_tgnum; + __u32 xid2_sender_id; + __u8 xid2_flag2; + __u8 xid2_option; + char xid2_resv3[8]; + __u16 xid2_resv4; + __u8 xid2_dlc_type; + __u16 xid2_resv5; + __u8 xid2_mpc_flag; + __u8 xid2_resv6; + __u16 xid2_buf_len; + char xid2_buffer[255 - (13 * sizeof(__u8) + + 2 * sizeof(__u32) + + 4 * sizeof(__u16) + + 8 * sizeof(char))]; +} __attribute__ ((packed)); + +#define XID2_LENGTH (sizeof(struct xid2)) + +struct th_header { + __u8 th_seg; + __u8 th_ch_flag; +#define TH_HAS_PDU 0xf0 +#define TH_IS_XID 0x01 +#define TH_SWEEP_REQ 0xfe +#define TH_SWEEP_RESP 0xff + __u8 th_blk_flag; +#define TH_DATA_IS_XID 0x80 +#define TH_RETRY 0x40 +#define TH_DISCONTACT 0xc0 +#define TH_SEG_BLK 0x20 +#define TH_LAST_SEG 0x10 +#define TH_PDU_PART 0x08 + __u8 th_is_xid; /* is 0x01 if this is XID */ + __u32 th_seq_num; +} __attribute__ ((packed)); + +struct th_addon { + __u32 th_last_seq; + __u32 th_resvd; +} __attribute__ ((packed)); + +struct th_sweep { + struct th_header th; + struct th_addon sw; +} __attribute__ ((packed)); + +#define TH_HEADER_LENGTH (sizeof(struct th_header)) +#define TH_SWEEP_LENGTH (sizeof(struct th_sweep)) + +#define PDU_LAST 0x80 +#define PDU_CNTL 0x40 +#define PDU_FIRST 0x20 + +struct pdu { + __u32 pdu_offset; + __u8 pdu_flag; + __u8 pdu_proto; /* 0x01 is APPN SNA */ + __u16 pdu_seq; +} __attribute__ ((packed)); + +#define PDU_HEADER_LENGTH (sizeof(struct pdu)) + +struct qllc { + __u8 qllc_address; +#define QLLC_REQ 0xFF +#define QLLC_RESP 0x00 + __u8 qllc_commands; +#define QLLC_DISCONNECT 0x53 +#define QLLC_UNSEQACK 0x73 +#define QLLC_SETMODE 0x93 +#define QLLC_EXCHID 0xBF +} __attribute__ ((packed)); + + +/* + * Definition of one MPC group + */ + +#define MAX_MPCGCHAN 10 +#define MPC_XID_TIMEOUT_VALUE 10000 +#define MPC_CHANNEL_ADD 0 +#define MPC_CHANNEL_REMOVE 1 +#define MPC_CHANNEL_ATTN 2 +#define XSIDE 1 +#define YSIDE 0 + +struct mpcg_info { + struct sk_buff *skb; + struct channel *ch; + struct xid2 *xid; + struct th_sweep *sweep; + struct th_header *th; +}; + +struct mpc_group { + struct tasklet_struct mpc_tasklet; + struct tasklet_struct mpc_tasklet2; + int changed_side; + int saved_state; + int channels_terminating; + int out_of_sequence; + int flow_off_called; + int port_num; + int port_persist; + int alloc_called; + __u32 xid2_adj_id; + __u8 xid2_tgnum; + __u32 xid2_sender_id; + int num_channel_paths; + int active_channels[2]; + __u16 group_max_buflen; + int outstanding_xid2; + int outstanding_xid7; + int outstanding_xid7_p2; + int sweep_req_pend_num; + int sweep_rsp_pend_num; + struct sk_buff *xid_skb; + char 
*xid_skb_data; + struct th_header *xid_th; + struct xid2 *xid; + char *xid_id; + struct th_header *rcvd_xid_th; + struct sk_buff *rcvd_xid_skb; + char *rcvd_xid_data; + __u8 in_sweep; + __u8 roll; + struct xid2 *saved_xid2; + void (*allochanfunc)(int, int); + int allocchan_callback_retries; + void (*estconnfunc)(int, int, int); + int estconn_callback_retries; + int estconn_called; + int xidnogood; + int send_qllc_disc; + fsm_timer timer; + fsm_instance *fsm; /* group xid fsm */ +}; + +#ifdef DEBUGDATA +void ctcmpc_dumpit(char *buf, int len); +#else +static inline void ctcmpc_dumpit(char *buf, int len) +{ +} +#endif + +#ifdef DEBUGDATA +/* + * Dump header and first 16 bytes of an sk_buff for debugging purposes. + * + * skb The struct sk_buff to dump. + * offset Offset relative to skb-data, where to start the dump. + */ +void ctcmpc_dump_skb(struct sk_buff *skb, int offset); +#else +static inline void ctcmpc_dump_skb(struct sk_buff *skb, int offset) +{} +#endif + +static inline void ctcmpc_dump32(char *buf, int len) +{ + if (len < 32) + ctcmpc_dumpit(buf, len); + else + ctcmpc_dumpit(buf, 32); +} + +int ctcmpc_open(struct net_device *); +void ctcm_ccw_check_rc(struct channel *, int, char *); +void mpc_group_ready(unsigned long adev); +int mpc_channel_action(struct channel *ch, int direction, int action); +void mpc_action_send_discontact(unsigned long thischan); +void mpc_action_discontact(fsm_instance *fi, int event, void *arg); +void ctcmpc_bh(unsigned long thischan); +#endif +/* --- This is the END my friend --- */ diff --git a/drivers/s390/net/ctcm_sysfs.c b/drivers/s390/net/ctcm_sysfs.c new file mode 100644 index 000000000000..bb2d13721d34 --- /dev/null +++ b/drivers/s390/net/ctcm_sysfs.c @@ -0,0 +1,210 @@ +/* + * drivers/s390/net/ctcm_sysfs.c + * + * Copyright IBM Corp. 
2007, 2007 + * Authors: Peter Tiedemann (ptiedem@de.ibm.com) + * + */ + +#undef DEBUG +#undef DEBUGDATA +#undef DEBUGCCW + +#include +#include "ctcm_main.h" + +/* + * sysfs attributes + */ + +static ssize_t ctcm_buffer_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct ctcm_priv *priv = dev_get_drvdata(dev); + + if (!priv) + return -ENODEV; + return sprintf(buf, "%d\n", priv->buffer_size); +} + +static ssize_t ctcm_buffer_write(struct device *dev, + struct device_attribute *attr, const char *buf, size_t count) +{ + struct net_device *ndev; + int bs1; + struct ctcm_priv *priv = dev_get_drvdata(dev); + + if (!(priv && priv->channel[READ] && + (ndev = priv->channel[READ]->netdev))) { + CTCM_DBF_TEXT(SETUP, CTC_DBF_ERROR, "bfnondev"); + return -ENODEV; + } + + sscanf(buf, "%u", &bs1); + if (bs1 > CTCM_BUFSIZE_LIMIT) + goto einval; + if (bs1 < (576 + LL_HEADER_LENGTH + 2)) + goto einval; + priv->buffer_size = bs1; /* just to overwrite the default */ + + if ((ndev->flags & IFF_RUNNING) && + (bs1 < (ndev->mtu + LL_HEADER_LENGTH + 2))) + goto einval; + + priv->channel[READ]->max_bufsize = bs1; + priv->channel[WRITE]->max_bufsize = bs1; + if (!(ndev->flags & IFF_RUNNING)) + ndev->mtu = bs1 - LL_HEADER_LENGTH - 2; + priv->channel[READ]->flags |= CHANNEL_FLAGS_BUFSIZE_CHANGED; + priv->channel[WRITE]->flags |= CHANNEL_FLAGS_BUFSIZE_CHANGED; + + CTCM_DBF_DEV(SETUP, ndev, buf); + return count; + +einval: + CTCM_DBF_DEV(SETUP, ndev, "buff_err"); + return -EINVAL; +} + +static void ctcm_print_statistics(struct ctcm_priv *priv) +{ + char *sbuf; + char *p; + + if (!priv) + return; + sbuf = kmalloc(2048, GFP_KERNEL); + if (sbuf == NULL) + return; + p = sbuf; + + p += sprintf(p, " Device FSM state: %s\n", + fsm_getstate_str(priv->fsm)); + p += sprintf(p, " RX channel FSM state: %s\n", + fsm_getstate_str(priv->channel[READ]->fsm)); + p += sprintf(p, " TX channel FSM state: %s\n", + fsm_getstate_str(priv->channel[WRITE]->fsm)); + p += sprintf(p, " Max. TX buffer used: %ld\n", + priv->channel[WRITE]->prof.maxmulti); + p += sprintf(p, " Max. chained SKBs: %ld\n", + priv->channel[WRITE]->prof.maxcqueue); + p += sprintf(p, " TX single write ops: %ld\n", + priv->channel[WRITE]->prof.doios_single); + p += sprintf(p, " TX multi write ops: %ld\n", + priv->channel[WRITE]->prof.doios_multi); + p += sprintf(p, " Netto bytes written: %ld\n", + priv->channel[WRITE]->prof.txlen); + p += sprintf(p, " Max. 
TX IO-time: %ld\n", + priv->channel[WRITE]->prof.tx_time); + + printk(KERN_INFO "Statistics for %s:\n%s", + priv->channel[WRITE]->netdev->name, sbuf); + kfree(sbuf); + return; +} + +static ssize_t stats_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct ctcm_priv *priv = dev_get_drvdata(dev); + if (!priv) + return -ENODEV; + ctcm_print_statistics(priv); + return sprintf(buf, "0\n"); +} + +static ssize_t stats_write(struct device *dev, struct device_attribute *attr, + const char *buf, size_t count) +{ + struct ctcm_priv *priv = dev_get_drvdata(dev); + if (!priv) + return -ENODEV; + /* Reset statistics */ + memset(&priv->channel[WRITE]->prof, 0, + sizeof(priv->channel[WRITE]->prof)); + return count; +} + +static ssize_t ctcm_proto_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct ctcm_priv *priv = dev_get_drvdata(dev); + if (!priv) + return -ENODEV; + + return sprintf(buf, "%d\n", priv->protocol); +} + +static ssize_t ctcm_proto_store(struct device *dev, + struct device_attribute *attr, const char *buf, size_t count) +{ + int value; + struct ctcm_priv *priv = dev_get_drvdata(dev); + + if (!priv) + return -ENODEV; + sscanf(buf, "%u", &value); + if (!((value == CTCM_PROTO_S390) || + (value == CTCM_PROTO_LINUX) || + (value == CTCM_PROTO_MPC) || + (value == CTCM_PROTO_OS390))) + return -EINVAL; + priv->protocol = value; + CTCM_DBF_DEV(SETUP, dev, buf); + + return count; +} + +static ssize_t ctcm_type_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct ccwgroup_device *cgdev; + + cgdev = to_ccwgroupdev(dev); + if (!cgdev) + return -ENODEV; + + return sprintf(buf, "%s\n", + cu3088_type[cgdev->cdev[0]->id.driver_info]); +} + +static DEVICE_ATTR(buffer, 0644, ctcm_buffer_show, ctcm_buffer_write); +static DEVICE_ATTR(protocol, 0644, ctcm_proto_show, ctcm_proto_store); +static DEVICE_ATTR(type, 0444, ctcm_type_show, NULL); +static DEVICE_ATTR(stats, 0644, stats_show, stats_write); + +static struct attribute *ctcm_attr[] = { + &dev_attr_protocol.attr, + &dev_attr_type.attr, + &dev_attr_buffer.attr, + NULL, +}; + +static struct attribute_group ctcm_attr_group = { + .attrs = ctcm_attr, +}; + +int ctcm_add_attributes(struct device *dev) +{ + int rc; + + rc = device_create_file(dev, &dev_attr_stats); + + return rc; +} + +void ctcm_remove_attributes(struct device *dev) +{ + device_remove_file(dev, &dev_attr_stats); +} + +int ctcm_add_files(struct device *dev) +{ + return sysfs_create_group(&dev->kobj, &ctcm_attr_group); +} + +void ctcm_remove_files(struct device *dev) +{ + sysfs_remove_group(&dev->kobj, &ctcm_attr_group); +} + -- cgit v1.2.3-59-g8ed1b From 04885948b101c44cbec9dacfb49b28290a899012 Mon Sep 17 00:00:00 2001 From: Peter Tiedemann Date: Fri, 8 Feb 2008 00:03:50 +0100 Subject: ctc: removal of the old ctc driver ctc driver is replaced by a new ctcm driver. The ctcm driver supports the channel-to-channel connections of the old ctc driver plus an additional MPC protocol to provide SNA connectivity. This patch removes the functions of the old ctc driver. 
Signed-off-by: Peter Tiedemann Signed-off-by: Ursula Braun Signed-off-by: Jeff Garzik --- drivers/s390/net/ctcdbug.c | 80 -- drivers/s390/net/ctcdbug.h | 125 -- drivers/s390/net/ctcmain.c | 3062 -------------------------------------------- drivers/s390/net/ctcmain.h | 270 ---- 4 files changed, 3537 deletions(-) delete mode 100644 drivers/s390/net/ctcdbug.c delete mode 100644 drivers/s390/net/ctcdbug.h delete mode 100644 drivers/s390/net/ctcmain.c delete mode 100644 drivers/s390/net/ctcmain.h (limited to 'drivers/s390') diff --git a/drivers/s390/net/ctcdbug.c b/drivers/s390/net/ctcdbug.c deleted file mode 100644 index e6e72deb36b5..000000000000 --- a/drivers/s390/net/ctcdbug.c +++ /dev/null @@ -1,80 +0,0 @@ -/* - * - * linux/drivers/s390/net/ctcdbug.c - * - * CTC / ESCON network driver - s390 dbf exploit. - * - * Copyright 2000,2003 IBM Corporation - * - * Author(s): Original Code written by - * Peter Tiedemann (ptiedem@de.ibm.com) - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License as published by - * the Free Software Foundation; either version 2, or (at your option) - * any later version. - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. - * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. - */ - -#include "ctcdbug.h" - -/** - * Debug Facility Stuff - */ -debug_info_t *ctc_dbf_setup = NULL; -debug_info_t *ctc_dbf_data = NULL; -debug_info_t *ctc_dbf_trace = NULL; - -DEFINE_PER_CPU(char[256], ctc_dbf_txt_buf); - -void -ctc_unregister_dbf_views(void) -{ - if (ctc_dbf_setup) - debug_unregister(ctc_dbf_setup); - if (ctc_dbf_data) - debug_unregister(ctc_dbf_data); - if (ctc_dbf_trace) - debug_unregister(ctc_dbf_trace); -} -int -ctc_register_dbf_views(void) -{ - ctc_dbf_setup = debug_register(CTC_DBF_SETUP_NAME, - CTC_DBF_SETUP_PAGES, - CTC_DBF_SETUP_NR_AREAS, - CTC_DBF_SETUP_LEN); - ctc_dbf_data = debug_register(CTC_DBF_DATA_NAME, - CTC_DBF_DATA_PAGES, - CTC_DBF_DATA_NR_AREAS, - CTC_DBF_DATA_LEN); - ctc_dbf_trace = debug_register(CTC_DBF_TRACE_NAME, - CTC_DBF_TRACE_PAGES, - CTC_DBF_TRACE_NR_AREAS, - CTC_DBF_TRACE_LEN); - - if ((ctc_dbf_setup == NULL) || (ctc_dbf_data == NULL) || - (ctc_dbf_trace == NULL)) { - ctc_unregister_dbf_views(); - return -ENOMEM; - } - debug_register_view(ctc_dbf_setup, &debug_hex_ascii_view); - debug_set_level(ctc_dbf_setup, CTC_DBF_SETUP_LEVEL); - - debug_register_view(ctc_dbf_data, &debug_hex_ascii_view); - debug_set_level(ctc_dbf_data, CTC_DBF_DATA_LEVEL); - - debug_register_view(ctc_dbf_trace, &debug_hex_ascii_view); - debug_set_level(ctc_dbf_trace, CTC_DBF_TRACE_LEVEL); - - return 0; -} - diff --git a/drivers/s390/net/ctcdbug.h b/drivers/s390/net/ctcdbug.h deleted file mode 100644 index 413925ee23d1..000000000000 --- a/drivers/s390/net/ctcdbug.h +++ /dev/null @@ -1,125 +0,0 @@ -/* - * - * linux/drivers/s390/net/ctcdbug.h - * - * CTC / ESCON network driver - s390 dbf exploit. 
- * - * Copyright 2000,2003 IBM Corporation - * - * Author(s): Original Code written by - * Peter Tiedemann (ptiedem@de.ibm.com) - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License as published by - * the Free Software Foundation; either version 2, or (at your option) - * any later version. - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. - * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. - */ -#ifndef _CTCDBUG_H_ -#define _CTCDBUG_H_ - -#include -#include "ctcmain.h" -/** - * Debug Facility stuff - */ -#define CTC_DBF_SETUP_NAME "ctc_setup" -#define CTC_DBF_SETUP_LEN 16 -#define CTC_DBF_SETUP_PAGES 8 -#define CTC_DBF_SETUP_NR_AREAS 1 -#define CTC_DBF_SETUP_LEVEL 3 - -#define CTC_DBF_DATA_NAME "ctc_data" -#define CTC_DBF_DATA_LEN 128 -#define CTC_DBF_DATA_PAGES 8 -#define CTC_DBF_DATA_NR_AREAS 1 -#define CTC_DBF_DATA_LEVEL 3 - -#define CTC_DBF_TRACE_NAME "ctc_trace" -#define CTC_DBF_TRACE_LEN 16 -#define CTC_DBF_TRACE_PAGES 4 -#define CTC_DBF_TRACE_NR_AREAS 2 -#define CTC_DBF_TRACE_LEVEL 3 - -#define DBF_TEXT(name,level,text) \ - do { \ - debug_text_event(ctc_dbf_##name,level,text); \ - } while (0) - -#define DBF_HEX(name,level,addr,len) \ - do { \ - debug_event(ctc_dbf_##name,level,(void*)(addr),len); \ - } while (0) - -DECLARE_PER_CPU(char[256], ctc_dbf_txt_buf); -extern debug_info_t *ctc_dbf_setup; -extern debug_info_t *ctc_dbf_data; -extern debug_info_t *ctc_dbf_trace; - - -#define DBF_TEXT_(name,level,text...) \ - do { \ - char* ctc_dbf_txt_buf = get_cpu_var(ctc_dbf_txt_buf); \ - sprintf(ctc_dbf_txt_buf, text); \ - debug_text_event(ctc_dbf_##name,level,ctc_dbf_txt_buf); \ - put_cpu_var(ctc_dbf_txt_buf); \ - } while (0) - -#define DBF_SPRINTF(name,level,text...) 
\ - do { \ - debug_sprintf_event(ctc_dbf_trace, level, ##text ); \ - debug_sprintf_event(ctc_dbf_trace, level, text ); \ - } while (0) - - -int ctc_register_dbf_views(void); - -void ctc_unregister_dbf_views(void); - -/** - * some more debug stuff - */ - -#define HEXDUMP16(importance,header,ptr) \ -PRINT_##importance(header "%02x %02x %02x %02x %02x %02x %02x %02x " \ - "%02x %02x %02x %02x %02x %02x %02x %02x\n", \ - *(((char*)ptr)),*(((char*)ptr)+1),*(((char*)ptr)+2), \ - *(((char*)ptr)+3),*(((char*)ptr)+4),*(((char*)ptr)+5), \ - *(((char*)ptr)+6),*(((char*)ptr)+7),*(((char*)ptr)+8), \ - *(((char*)ptr)+9),*(((char*)ptr)+10),*(((char*)ptr)+11), \ - *(((char*)ptr)+12),*(((char*)ptr)+13), \ - *(((char*)ptr)+14),*(((char*)ptr)+15)); \ -PRINT_##importance(header "%02x %02x %02x %02x %02x %02x %02x %02x " \ - "%02x %02x %02x %02x %02x %02x %02x %02x\n", \ - *(((char*)ptr)+16),*(((char*)ptr)+17), \ - *(((char*)ptr)+18),*(((char*)ptr)+19), \ - *(((char*)ptr)+20),*(((char*)ptr)+21), \ - *(((char*)ptr)+22),*(((char*)ptr)+23), \ - *(((char*)ptr)+24),*(((char*)ptr)+25), \ - *(((char*)ptr)+26),*(((char*)ptr)+27), \ - *(((char*)ptr)+28),*(((char*)ptr)+29), \ - *(((char*)ptr)+30),*(((char*)ptr)+31)); - -static inline void -hex_dump(unsigned char *buf, size_t len) -{ - size_t i; - - for (i = 0; i < len; i++) { - if (i && !(i % 16)) - printk("\n"); - printk("%02x ", *(buf + i)); - } - printk("\n"); -} - - -#endif diff --git a/drivers/s390/net/ctcmain.c b/drivers/s390/net/ctcmain.c deleted file mode 100644 index 77a503139e32..000000000000 --- a/drivers/s390/net/ctcmain.c +++ /dev/null @@ -1,3062 +0,0 @@ -/* - * CTC / ESCON network driver - * - * Copyright (C) 2001 IBM Deutschland Entwicklung GmbH, IBM Corporation - * Author(s): Fritz Elfert (elfert@de.ibm.com, felfert@millenux.com) - * Fixes by : Jochen Röhrig (roehrig@de.ibm.com) - * Arnaldo Carvalho de Melo - Peter Tiedemann (ptiedem@de.ibm.com) - * Driver Model stuff by : Cornelia Huck - * - * Documentation used: - * - Principles of Operation (IBM doc#: SA22-7201-06) - * - Common IO/-Device Commands and Self Description (IBM doc#: SA22-7204-02) - * - Common IO/-Device Commands and Self Description (IBM doc#: SN22-5535) - * - ESCON Channel-to-Channel Adapter (IBM doc#: SA22-7203-00) - * - ESCON I/O Interface (IBM doc#: SA22-7202-029 - * - * and the source of the original CTC driver by: - * Dieter Wellerdiek (wel@de.ibm.com) - * Martin Schwidefsky (schwidefsky@de.ibm.com) - * Denis Joseph Barrow (djbarrow@de.ibm.com,barrow_dj@yahoo.com) - * Jochen Röhrig (roehrig@de.ibm.com) - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License as published by - * the Free Software Foundation; either version 2, or (at your option) - * any later version. - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. - * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 
- * - */ -#undef DEBUG -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#include -#include - -#include -#include -#include -#include -#include -#include - -#include -#include -#include -#include - -#include - -#include "fsm.h" -#include "cu3088.h" - -#include "ctcdbug.h" -#include "ctcmain.h" - -MODULE_AUTHOR("(C) 2000 IBM Corp. by Fritz Elfert (felfert@millenux.com)"); -MODULE_DESCRIPTION("Linux for S/390 CTC/Escon Driver"); -MODULE_LICENSE("GPL"); -/** - * States of the interface statemachine. - */ -enum dev_states { - DEV_STATE_STOPPED, - DEV_STATE_STARTWAIT_RXTX, - DEV_STATE_STARTWAIT_RX, - DEV_STATE_STARTWAIT_TX, - DEV_STATE_STOPWAIT_RXTX, - DEV_STATE_STOPWAIT_RX, - DEV_STATE_STOPWAIT_TX, - DEV_STATE_RUNNING, - /** - * MUST be always the last element!! - */ - CTC_NR_DEV_STATES -}; - -static const char *dev_state_names[] = { - "Stopped", - "StartWait RXTX", - "StartWait RX", - "StartWait TX", - "StopWait RXTX", - "StopWait RX", - "StopWait TX", - "Running", -}; - -/** - * Events of the interface statemachine. - */ -enum dev_events { - DEV_EVENT_START, - DEV_EVENT_STOP, - DEV_EVENT_RXUP, - DEV_EVENT_TXUP, - DEV_EVENT_RXDOWN, - DEV_EVENT_TXDOWN, - DEV_EVENT_RESTART, - /** - * MUST be always the last element!! - */ - CTC_NR_DEV_EVENTS -}; - -static const char *dev_event_names[] = { - "Start", - "Stop", - "RX up", - "TX up", - "RX down", - "TX down", - "Restart", -}; - -/** - * Events of the channel statemachine - */ -enum ch_events { - /** - * Events, representing return code of - * I/O operations (ccw_device_start, ccw_device_halt et al.) - */ - CH_EVENT_IO_SUCCESS, - CH_EVENT_IO_EBUSY, - CH_EVENT_IO_ENODEV, - CH_EVENT_IO_EIO, - CH_EVENT_IO_UNKNOWN, - - CH_EVENT_ATTNBUSY, - CH_EVENT_ATTN, - CH_EVENT_BUSY, - - /** - * Events, representing unit-check - */ - CH_EVENT_UC_RCRESET, - CH_EVENT_UC_RSRESET, - CH_EVENT_UC_TXTIMEOUT, - CH_EVENT_UC_TXPARITY, - CH_EVENT_UC_HWFAIL, - CH_EVENT_UC_RXPARITY, - CH_EVENT_UC_ZERO, - CH_EVENT_UC_UNKNOWN, - - /** - * Events, representing subchannel-check - */ - CH_EVENT_SC_UNKNOWN, - - /** - * Events, representing machine checks - */ - CH_EVENT_MC_FAIL, - CH_EVENT_MC_GOOD, - - /** - * Event, representing normal IRQ - */ - CH_EVENT_IRQ, - CH_EVENT_FINSTAT, - - /** - * Event, representing timer expiry. - */ - CH_EVENT_TIMER, - - /** - * Events, representing commands from upper levels. - */ - CH_EVENT_START, - CH_EVENT_STOP, - - /** - * MUST be always the last element!! - */ - NR_CH_EVENTS, -}; - -/** - * States of the channel statemachine. - */ -enum ch_states { - /** - * Channel not assigned to any device, - * initial state, direction invalid - */ - CH_STATE_IDLE, - - /** - * Channel assigned but not operating - */ - CH_STATE_STOPPED, - CH_STATE_STARTWAIT, - CH_STATE_STARTRETRY, - CH_STATE_SETUPWAIT, - CH_STATE_RXINIT, - CH_STATE_TXINIT, - CH_STATE_RX, - CH_STATE_TX, - CH_STATE_RXIDLE, - CH_STATE_TXIDLE, - CH_STATE_RXERR, - CH_STATE_TXERR, - CH_STATE_TERM, - CH_STATE_DTERM, - CH_STATE_NOTOP, - - /** - * MUST be always the last element!! - */ - NR_CH_STATES, -}; - -static int loglevel = CTC_LOGLEVEL_DEFAULT; - -/** - * Linked list of all detected channels. - */ -static struct channel *channels = NULL; - -/** - * Print Banner. - */ -static void -print_banner(void) -{ - static int printed = 0; - - if (printed) - return; - - printk(KERN_INFO "CTC driver initialized\n"); - printed = 1; -} - -/** - * Return type of a detected device. 
- */ -static enum channel_types -get_channel_type(struct ccw_device_id *id) -{ - enum channel_types type = (enum channel_types) id->driver_info; - - if (type == channel_type_ficon) - type = channel_type_escon; - - return type; -} - -static const char *ch_event_names[] = { - "ccw_device success", - "ccw_device busy", - "ccw_device enodev", - "ccw_device ioerr", - "ccw_device unknown", - - "Status ATTN & BUSY", - "Status ATTN", - "Status BUSY", - - "Unit check remote reset", - "Unit check remote system reset", - "Unit check TX timeout", - "Unit check TX parity", - "Unit check Hardware failure", - "Unit check RX parity", - "Unit check ZERO", - "Unit check Unknown", - - "SubChannel check Unknown", - - "Machine check failure", - "Machine check operational", - - "IRQ normal", - "IRQ final", - - "Timer", - - "Start", - "Stop", -}; - -static const char *ch_state_names[] = { - "Idle", - "Stopped", - "StartWait", - "StartRetry", - "SetupWait", - "RX init", - "TX init", - "RX", - "TX", - "RX idle", - "TX idle", - "RX error", - "TX error", - "Terminating", - "Restarting", - "Not operational", -}; - -#ifdef DEBUG -/** - * Dump header and first 16 bytes of an sk_buff for debugging purposes. - * - * @param skb The sk_buff to dump. - * @param offset Offset relative to skb-data, where to start the dump. - */ -static void -ctc_dump_skb(struct sk_buff *skb, int offset) -{ - unsigned char *p = skb->data; - __u16 bl; - struct ll_header *header; - int i; - - if (!(loglevel & CTC_LOGLEVEL_DEBUG)) - return; - p += offset; - bl = *((__u16 *) p); - p += 2; - header = (struct ll_header *) p; - p -= 2; - - printk(KERN_DEBUG "dump:\n"); - printk(KERN_DEBUG "blocklen=%d %04x\n", bl, bl); - - printk(KERN_DEBUG "h->length=%d %04x\n", header->length, - header->length); - printk(KERN_DEBUG "h->type=%04x\n", header->type); - printk(KERN_DEBUG "h->unused=%04x\n", header->unused); - if (bl > 16) - bl = 16; - printk(KERN_DEBUG "data: "); - for (i = 0; i < bl; i++) - printk("%02x%s", *p++, (i % 16) ? " " : "\n<7>"); - printk("\n"); -} -#else -static inline void -ctc_dump_skb(struct sk_buff *skb, int offset) -{ -} -#endif - -/** - * Unpack a just received skb and hand it over to - * upper layers. - * - * @param ch The channel where this skb has been received. - * @param pskb The received skb. - */ -static void -ctc_unpack_skb(struct channel *ch, struct sk_buff *pskb) -{ - struct net_device *dev = ch->netdev; - struct ctc_priv *privptr = (struct ctc_priv *) dev->priv; - __u16 len = *((__u16 *) pskb->data); - - DBF_TEXT(trace, 4, __FUNCTION__); - skb_put(pskb, 2 + LL_HEADER_LENGTH); - skb_pull(pskb, 2); - pskb->dev = dev; - pskb->ip_summed = CHECKSUM_UNNECESSARY; - while (len > 0) { - struct sk_buff *skb; - struct ll_header *header = (struct ll_header *) pskb->data; - - skb_pull(pskb, LL_HEADER_LENGTH); - if ((ch->protocol == CTC_PROTO_S390) && - (header->type != ETH_P_IP)) { - -#ifndef DEBUG - if (!(ch->logflags & LOG_FLAG_ILLEGALPKT)) { -#endif - /** - * Check packet type only if we stick strictly - * to S/390's protocol of OS390. This only - * supports IP. Otherwise allow any packet - * type. 
- */ - ctc_pr_warn( - "%s Illegal packet type 0x%04x received, dropping\n", - dev->name, header->type); - ch->logflags |= LOG_FLAG_ILLEGALPKT; -#ifndef DEBUG - } -#endif -#ifdef DEBUG - ctc_dump_skb(pskb, -6); -#endif - privptr->stats.rx_dropped++; - privptr->stats.rx_frame_errors++; - return; - } - pskb->protocol = ntohs(header->type); - if (header->length <= LL_HEADER_LENGTH) { -#ifndef DEBUG - if (!(ch->logflags & LOG_FLAG_ILLEGALSIZE)) { -#endif - ctc_pr_warn( - "%s Illegal packet size %d " - "received (MTU=%d blocklen=%d), " - "dropping\n", dev->name, header->length, - dev->mtu, len); - ch->logflags |= LOG_FLAG_ILLEGALSIZE; -#ifndef DEBUG - } -#endif -#ifdef DEBUG - ctc_dump_skb(pskb, -6); -#endif - privptr->stats.rx_dropped++; - privptr->stats.rx_length_errors++; - return; - } - header->length -= LL_HEADER_LENGTH; - len -= LL_HEADER_LENGTH; - if ((header->length > skb_tailroom(pskb)) || - (header->length > len)) { -#ifndef DEBUG - if (!(ch->logflags & LOG_FLAG_OVERRUN)) { -#endif - ctc_pr_warn( - "%s Illegal packet size %d " - "(beyond the end of received data), " - "dropping\n", dev->name, header->length); - ch->logflags |= LOG_FLAG_OVERRUN; -#ifndef DEBUG - } -#endif -#ifdef DEBUG - ctc_dump_skb(pskb, -6); -#endif - privptr->stats.rx_dropped++; - privptr->stats.rx_length_errors++; - return; - } - skb_put(pskb, header->length); - skb_reset_mac_header(pskb); - len -= header->length; - skb = dev_alloc_skb(pskb->len); - if (!skb) { -#ifndef DEBUG - if (!(ch->logflags & LOG_FLAG_NOMEM)) { -#endif - ctc_pr_warn( - "%s Out of memory in ctc_unpack_skb\n", - dev->name); - ch->logflags |= LOG_FLAG_NOMEM; -#ifndef DEBUG - } -#endif - privptr->stats.rx_dropped++; - return; - } - skb_copy_from_linear_data(pskb, skb_put(skb, pskb->len), - pskb->len); - skb_reset_mac_header(skb); - skb->dev = pskb->dev; - skb->protocol = pskb->protocol; - pskb->ip_summed = CHECKSUM_UNNECESSARY; - /** - * reset logflags - */ - ch->logflags = 0; - privptr->stats.rx_packets++; - privptr->stats.rx_bytes += skb->len; - netif_rx_ni(skb); - dev->last_rx = jiffies; - if (len > 0) { - skb_pull(pskb, header->length); - if (skb_tailroom(pskb) < LL_HEADER_LENGTH) { -#ifndef DEBUG - if (!(ch->logflags & LOG_FLAG_OVERRUN)) { -#endif - ctc_pr_warn( - "%s Overrun in ctc_unpack_skb\n", - dev->name); - ch->logflags |= LOG_FLAG_OVERRUN; -#ifndef DEBUG - } -#endif - return; - } - skb_put(pskb, LL_HEADER_LENGTH); - } - } -} - -/** - * Check return code of a preceeding ccw_device call, halt_IO etc... - * - * @param ch The channel, the error belongs to. - * @param return_code The error code to inspect. - */ -static void -ccw_check_return_code(struct channel *ch, int return_code, char *msg) -{ - DBF_TEXT(trace, 5, __FUNCTION__); - switch (return_code) { - case 0: - fsm_event(ch->fsm, CH_EVENT_IO_SUCCESS, ch); - break; - case -EBUSY: - ctc_pr_warn("%s (%s): Busy !\n", ch->id, msg); - fsm_event(ch->fsm, CH_EVENT_IO_EBUSY, ch); - break; - case -ENODEV: - ctc_pr_emerg("%s (%s): Invalid device called for IO\n", - ch->id, msg); - fsm_event(ch->fsm, CH_EVENT_IO_ENODEV, ch); - break; - case -EIO: - ctc_pr_emerg("%s (%s): Status pending... \n", - ch->id, msg); - fsm_event(ch->fsm, CH_EVENT_IO_EIO, ch); - break; - default: - ctc_pr_emerg("%s (%s): Unknown error in do_IO %04x\n", - ch->id, msg, return_code); - fsm_event(ch->fsm, CH_EVENT_IO_UNKNOWN, ch); - } -} - -/** - * Check sense of a unit check. - * - * @param ch The channel, the sense code belongs to. - * @param sense The sense code to inspect. 
- */ -static void -ccw_unit_check(struct channel *ch, unsigned char sense) -{ - DBF_TEXT(trace, 5, __FUNCTION__); - if (sense & SNS0_INTERVENTION_REQ) { - if (sense & 0x01) { - ctc_pr_debug("%s: Interface disc. or Sel. reset " - "(remote)\n", ch->id); - fsm_event(ch->fsm, CH_EVENT_UC_RCRESET, ch); - } else { - ctc_pr_debug("%s: System reset (remote)\n", ch->id); - fsm_event(ch->fsm, CH_EVENT_UC_RSRESET, ch); - } - } else if (sense & SNS0_EQUIPMENT_CHECK) { - if (sense & SNS0_BUS_OUT_CHECK) { - ctc_pr_warn("%s: Hardware malfunction (remote)\n", - ch->id); - fsm_event(ch->fsm, CH_EVENT_UC_HWFAIL, ch); - } else { - ctc_pr_warn("%s: Read-data parity error (remote)\n", - ch->id); - fsm_event(ch->fsm, CH_EVENT_UC_RXPARITY, ch); - } - } else if (sense & SNS0_BUS_OUT_CHECK) { - if (sense & 0x04) { - ctc_pr_warn("%s: Data-streaming timeout)\n", ch->id); - fsm_event(ch->fsm, CH_EVENT_UC_TXTIMEOUT, ch); - } else { - ctc_pr_warn("%s: Data-transfer parity error\n", ch->id); - fsm_event(ch->fsm, CH_EVENT_UC_TXPARITY, ch); - } - } else if (sense & SNS0_CMD_REJECT) { - ctc_pr_warn("%s: Command reject\n", ch->id); - } else if (sense == 0) { - ctc_pr_debug("%s: Unit check ZERO\n", ch->id); - fsm_event(ch->fsm, CH_EVENT_UC_ZERO, ch); - } else { - ctc_pr_warn("%s: Unit Check with sense code: %02x\n", - ch->id, sense); - fsm_event(ch->fsm, CH_EVENT_UC_UNKNOWN, ch); - } -} - -static void -ctc_purge_skb_queue(struct sk_buff_head *q) -{ - struct sk_buff *skb; - - DBF_TEXT(trace, 5, __FUNCTION__); - - while ((skb = skb_dequeue(q))) { - atomic_dec(&skb->users); - dev_kfree_skb_irq(skb); - } -} - -static int -ctc_checkalloc_buffer(struct channel *ch, int warn) -{ - DBF_TEXT(trace, 5, __FUNCTION__); - if ((ch->trans_skb == NULL) || - (ch->flags & CHANNEL_FLAGS_BUFSIZE_CHANGED)) { - if (ch->trans_skb != NULL) - dev_kfree_skb(ch->trans_skb); - clear_normalized_cda(&ch->ccw[1]); - ch->trans_skb = __dev_alloc_skb(ch->max_bufsize, - GFP_ATOMIC | GFP_DMA); - if (ch->trans_skb == NULL) { - if (warn) - ctc_pr_warn( - "%s: Couldn't alloc %s trans_skb\n", - ch->id, - (CHANNEL_DIRECTION(ch->flags) == READ) ? - "RX" : "TX"); - return -ENOMEM; - } - ch->ccw[1].count = ch->max_bufsize; - if (set_normalized_cda(&ch->ccw[1], ch->trans_skb->data)) { - dev_kfree_skb(ch->trans_skb); - ch->trans_skb = NULL; - if (warn) - ctc_pr_warn( - "%s: set_normalized_cda for %s " - "trans_skb failed, dropping packets\n", - ch->id, - (CHANNEL_DIRECTION(ch->flags) == READ) ? - "RX" : "TX"); - return -ENOMEM; - } - ch->ccw[1].count = 0; - ch->trans_skb_data = ch->trans_skb->data; - ch->flags &= ~CHANNEL_FLAGS_BUFSIZE_CHANGED; - } - return 0; -} - -/** - * Dummy NOP action for statemachines - */ -static void -fsm_action_nop(fsm_instance * fi, int event, void *arg) -{ -} - -/** - * Actions for channel - statemachines. - *****************************************************************************/ - -/** - * Normal data has been send. Free the corresponding - * skb (it's in io_queue), reset dev->tbusy and - * revert to idle state. - * - * @param fi An instance of a channel statemachine. - * @param event The event, just happened. - * @param arg Generic pointer, casted from channel * upon call. 
- */ -static void -ch_action_txdone(fsm_instance * fi, int event, void *arg) -{ - struct channel *ch = (struct channel *) arg; - struct net_device *dev = ch->netdev; - struct ctc_priv *privptr = dev->priv; - struct sk_buff *skb; - int first = 1; - int i; - unsigned long duration; - struct timespec done_stamp = current_kernel_time(); - - DBF_TEXT(trace, 4, __FUNCTION__); - - duration = - (done_stamp.tv_sec - ch->prof.send_stamp.tv_sec) * 1000000 + - (done_stamp.tv_nsec - ch->prof.send_stamp.tv_nsec) / 1000; - if (duration > ch->prof.tx_time) - ch->prof.tx_time = duration; - - if (ch->irb->scsw.count != 0) - ctc_pr_debug("%s: TX not complete, remaining %d bytes\n", - dev->name, ch->irb->scsw.count); - fsm_deltimer(&ch->timer); - while ((skb = skb_dequeue(&ch->io_queue))) { - privptr->stats.tx_packets++; - privptr->stats.tx_bytes += skb->len - LL_HEADER_LENGTH; - if (first) { - privptr->stats.tx_bytes += 2; - first = 0; - } - atomic_dec(&skb->users); - dev_kfree_skb_irq(skb); - } - spin_lock(&ch->collect_lock); - clear_normalized_cda(&ch->ccw[4]); - if (ch->collect_len > 0) { - int rc; - - if (ctc_checkalloc_buffer(ch, 1)) { - spin_unlock(&ch->collect_lock); - return; - } - ch->trans_skb->data = ch->trans_skb_data; - skb_reset_tail_pointer(ch->trans_skb); - ch->trans_skb->len = 0; - if (ch->prof.maxmulti < (ch->collect_len + 2)) - ch->prof.maxmulti = ch->collect_len + 2; - if (ch->prof.maxcqueue < skb_queue_len(&ch->collect_queue)) - ch->prof.maxcqueue = skb_queue_len(&ch->collect_queue); - *((__u16 *) skb_put(ch->trans_skb, 2)) = ch->collect_len + 2; - i = 0; - while ((skb = skb_dequeue(&ch->collect_queue))) { - skb_copy_from_linear_data(skb, skb_put(ch->trans_skb, - skb->len), - skb->len); - privptr->stats.tx_packets++; - privptr->stats.tx_bytes += skb->len - LL_HEADER_LENGTH; - atomic_dec(&skb->users); - dev_kfree_skb_irq(skb); - i++; - } - ch->collect_len = 0; - spin_unlock(&ch->collect_lock); - ch->ccw[1].count = ch->trans_skb->len; - fsm_addtimer(&ch->timer, CTC_TIMEOUT_5SEC, CH_EVENT_TIMER, ch); - ch->prof.send_stamp = current_kernel_time(); - rc = ccw_device_start(ch->cdev, &ch->ccw[0], - (unsigned long) ch, 0xff, 0); - ch->prof.doios_multi++; - if (rc != 0) { - privptr->stats.tx_dropped += i; - privptr->stats.tx_errors += i; - fsm_deltimer(&ch->timer); - ccw_check_return_code(ch, rc, "chained TX"); - } - } else { - spin_unlock(&ch->collect_lock); - fsm_newstate(fi, CH_STATE_TXIDLE); - } - ctc_clear_busy(dev); -} - -/** - * Initial data is sent. - * Notify device statemachine that we are up and - * running. - * - * @param fi An instance of a channel statemachine. - * @param event The event, just happened. - * @param arg Generic pointer, casted from channel * upon call. - */ -static void -ch_action_txidle(fsm_instance * fi, int event, void *arg) -{ - struct channel *ch = (struct channel *) arg; - - DBF_TEXT(trace, 4, __FUNCTION__); - fsm_deltimer(&ch->timer); - fsm_newstate(fi, CH_STATE_TXIDLE); - fsm_event(((struct ctc_priv *) ch->netdev->priv)->fsm, DEV_EVENT_TXUP, - ch->netdev); -} - -/** - * Got normal data, check for sanity, queue it up, allocate new buffer - * trigger bottom half, and initiate next read. - * - * @param fi An instance of a channel statemachine. - * @param event The event, just happened. - * @param arg Generic pointer, casted from channel * upon call. 
- */ -static void -ch_action_rx(fsm_instance * fi, int event, void *arg) -{ - struct channel *ch = (struct channel *) arg; - struct net_device *dev = ch->netdev; - struct ctc_priv *privptr = dev->priv; - int len = ch->max_bufsize - ch->irb->scsw.count; - struct sk_buff *skb = ch->trans_skb; - __u16 block_len = *((__u16 *) skb->data); - int check_len; - int rc; - - DBF_TEXT(trace, 4, __FUNCTION__); - fsm_deltimer(&ch->timer); - if (len < 8) { - ctc_pr_debug("%s: got packet with length %d < 8\n", - dev->name, len); - privptr->stats.rx_dropped++; - privptr->stats.rx_length_errors++; - goto again; - } - if (len > ch->max_bufsize) { - ctc_pr_debug("%s: got packet with length %d > %d\n", - dev->name, len, ch->max_bufsize); - privptr->stats.rx_dropped++; - privptr->stats.rx_length_errors++; - goto again; - } - - /** - * VM TCP seems to have a bug sending 2 trailing bytes of garbage. - */ - switch (ch->protocol) { - case CTC_PROTO_S390: - case CTC_PROTO_OS390: - check_len = block_len + 2; - break; - default: - check_len = block_len; - break; - } - if ((len < block_len) || (len > check_len)) { - ctc_pr_debug("%s: got block length %d != rx length %d\n", - dev->name, block_len, len); -#ifdef DEBUG - ctc_dump_skb(skb, 0); -#endif - *((__u16 *) skb->data) = len; - privptr->stats.rx_dropped++; - privptr->stats.rx_length_errors++; - goto again; - } - block_len -= 2; - if (block_len > 0) { - *((__u16 *) skb->data) = block_len; - ctc_unpack_skb(ch, skb); - } - again: - skb->data = ch->trans_skb_data; - skb_reset_tail_pointer(skb); - skb->len = 0; - if (ctc_checkalloc_buffer(ch, 1)) - return; - ch->ccw[1].count = ch->max_bufsize; - rc = ccw_device_start(ch->cdev, &ch->ccw[0], (unsigned long) ch, 0xff, 0); - if (rc != 0) - ccw_check_return_code(ch, rc, "normal RX"); -} - -static void ch_action_rxidle(fsm_instance * fi, int event, void *arg); - -/** - * Initialize connection by sending a __u16 of value 0. - * - * @param fi An instance of a channel statemachine. - * @param event The event, just happened. - * @param arg Generic pointer, casted from channel * upon call. - */ -static void -ch_action_firstio(fsm_instance * fi, int event, void *arg) -{ - struct channel *ch = (struct channel *) arg; - int rc; - - DBF_TEXT(trace, 4, __FUNCTION__); - - if (fsm_getstate(fi) == CH_STATE_TXIDLE) - ctc_pr_debug("%s: remote side issued READ?, init ...\n", ch->id); - fsm_deltimer(&ch->timer); - if (ctc_checkalloc_buffer(ch, 1)) - return; - if ((fsm_getstate(fi) == CH_STATE_SETUPWAIT) && - (ch->protocol == CTC_PROTO_OS390)) { - /* OS/390 resp. z/OS */ - if (CHANNEL_DIRECTION(ch->flags) == READ) { - *((__u16 *) ch->trans_skb->data) = CTC_INITIAL_BLOCKLEN; - fsm_addtimer(&ch->timer, CTC_TIMEOUT_5SEC, - CH_EVENT_TIMER, ch); - ch_action_rxidle(fi, event, arg); - } else { - struct net_device *dev = ch->netdev; - fsm_newstate(fi, CH_STATE_TXIDLE); - fsm_event(((struct ctc_priv *) dev->priv)->fsm, - DEV_EVENT_TXUP, dev); - } - return; - } - - /** - * Don't setup a timer for receiving the initial RX frame - * if in compatibility mode, since VM TCP delays the initial - * frame until it has some data to send. - */ - if ((CHANNEL_DIRECTION(ch->flags) == WRITE) || - (ch->protocol != CTC_PROTO_S390)) - fsm_addtimer(&ch->timer, CTC_TIMEOUT_5SEC, CH_EVENT_TIMER, ch); - - *((__u16 *) ch->trans_skb->data) = CTC_INITIAL_BLOCKLEN; - ch->ccw[1].count = 2; /* Transfer only length */ - - fsm_newstate(fi, (CHANNEL_DIRECTION(ch->flags) == READ) - ? 
CH_STATE_RXINIT : CH_STATE_TXINIT); - rc = ccw_device_start(ch->cdev, &ch->ccw[0], (unsigned long) ch, 0xff, 0); - if (rc != 0) { - fsm_deltimer(&ch->timer); - fsm_newstate(fi, CH_STATE_SETUPWAIT); - ccw_check_return_code(ch, rc, "init IO"); - } - /** - * If in compatibility mode since we don't setup a timer, we - * also signal RX channel up immediately. This enables us - * to send packets early which in turn usually triggers some - * reply from VM TCP which brings up the RX channel to it's - * final state. - */ - if ((CHANNEL_DIRECTION(ch->flags) == READ) && - (ch->protocol == CTC_PROTO_S390)) { - struct net_device *dev = ch->netdev; - fsm_event(((struct ctc_priv *) dev->priv)->fsm, DEV_EVENT_RXUP, - dev); - } -} - -/** - * Got initial data, check it. If OK, - * notify device statemachine that we are up and - * running. - * - * @param fi An instance of a channel statemachine. - * @param event The event, just happened. - * @param arg Generic pointer, casted from channel * upon call. - */ -static void -ch_action_rxidle(fsm_instance * fi, int event, void *arg) -{ - struct channel *ch = (struct channel *) arg; - struct net_device *dev = ch->netdev; - __u16 buflen; - int rc; - - DBF_TEXT(trace, 4, __FUNCTION__); - fsm_deltimer(&ch->timer); - buflen = *((__u16 *) ch->trans_skb->data); -#ifdef DEBUG - ctc_pr_debug("%s: Initial RX count %d\n", dev->name, buflen); -#endif - if (buflen >= CTC_INITIAL_BLOCKLEN) { - if (ctc_checkalloc_buffer(ch, 1)) - return; - ch->ccw[1].count = ch->max_bufsize; - fsm_newstate(fi, CH_STATE_RXIDLE); - rc = ccw_device_start(ch->cdev, &ch->ccw[0], - (unsigned long) ch, 0xff, 0); - if (rc != 0) { - fsm_newstate(fi, CH_STATE_RXINIT); - ccw_check_return_code(ch, rc, "initial RX"); - } else - fsm_event(((struct ctc_priv *) dev->priv)->fsm, - DEV_EVENT_RXUP, dev); - } else { - ctc_pr_debug("%s: Initial RX count %d not %d\n", - dev->name, buflen, CTC_INITIAL_BLOCKLEN); - ch_action_firstio(fi, event, arg); - } -} - -/** - * Set channel into extended mode. - * - * @param fi An instance of a channel statemachine. - * @param event The event, just happened. - * @param arg Generic pointer, casted from channel * upon call. - */ -static void -ch_action_setmode(fsm_instance * fi, int event, void *arg) -{ - struct channel *ch = (struct channel *) arg; - int rc; - unsigned long saveflags; - - DBF_TEXT(trace, 4, __FUNCTION__); - fsm_deltimer(&ch->timer); - fsm_addtimer(&ch->timer, CTC_TIMEOUT_5SEC, CH_EVENT_TIMER, ch); - fsm_newstate(fi, CH_STATE_SETUPWAIT); - saveflags = 0; /* avoids compiler warning with - spin_unlock_irqrestore */ - if (event == CH_EVENT_TIMER) // only for timer not yet locked - spin_lock_irqsave(get_ccwdev_lock(ch->cdev), saveflags); - rc = ccw_device_start(ch->cdev, &ch->ccw[6], (unsigned long) ch, 0xff, 0); - if (event == CH_EVENT_TIMER) - spin_unlock_irqrestore(get_ccwdev_lock(ch->cdev), saveflags); - if (rc != 0) { - fsm_deltimer(&ch->timer); - fsm_newstate(fi, CH_STATE_STARTWAIT); - ccw_check_return_code(ch, rc, "set Mode"); - } else - ch->retry = 0; -} - -/** - * Setup channel. - * - * @param fi An instance of a channel statemachine. - * @param event The event, just happened. - * @param arg Generic pointer, casted from channel * upon call. 
- */ -static void -ch_action_start(fsm_instance * fi, int event, void *arg) -{ - struct channel *ch = (struct channel *) arg; - unsigned long saveflags; - int rc; - struct net_device *dev; - - DBF_TEXT(trace, 4, __FUNCTION__); - if (ch == NULL) { - ctc_pr_warn("ch_action_start ch=NULL\n"); - return; - } - if (ch->netdev == NULL) { - ctc_pr_warn("ch_action_start dev=NULL, id=%s\n", ch->id); - return; - } - dev = ch->netdev; - -#ifdef DEBUG - ctc_pr_debug("%s: %s channel start\n", dev->name, - (CHANNEL_DIRECTION(ch->flags) == READ) ? "RX" : "TX"); -#endif - - if (ch->trans_skb != NULL) { - clear_normalized_cda(&ch->ccw[1]); - dev_kfree_skb(ch->trans_skb); - ch->trans_skb = NULL; - } - if (CHANNEL_DIRECTION(ch->flags) == READ) { - ch->ccw[1].cmd_code = CCW_CMD_READ; - ch->ccw[1].flags = CCW_FLAG_SLI; - ch->ccw[1].count = 0; - } else { - ch->ccw[1].cmd_code = CCW_CMD_WRITE; - ch->ccw[1].flags = CCW_FLAG_SLI | CCW_FLAG_CC; - ch->ccw[1].count = 0; - } - if (ctc_checkalloc_buffer(ch, 0)) { - ctc_pr_notice( - "%s: Could not allocate %s trans_skb, delaying " - "allocation until first transfer\n", - dev->name, - (CHANNEL_DIRECTION(ch->flags) == READ) ? "RX" : "TX"); - } - - ch->ccw[0].cmd_code = CCW_CMD_PREPARE; - ch->ccw[0].flags = CCW_FLAG_SLI | CCW_FLAG_CC; - ch->ccw[0].count = 0; - ch->ccw[0].cda = 0; - ch->ccw[2].cmd_code = CCW_CMD_NOOP; /* jointed CE + DE */ - ch->ccw[2].flags = CCW_FLAG_SLI; - ch->ccw[2].count = 0; - ch->ccw[2].cda = 0; - memcpy(&ch->ccw[3], &ch->ccw[0], sizeof (struct ccw1) * 3); - ch->ccw[4].cda = 0; - ch->ccw[4].flags &= ~CCW_FLAG_IDA; - - fsm_newstate(fi, CH_STATE_STARTWAIT); - fsm_addtimer(&ch->timer, 1000, CH_EVENT_TIMER, ch); - spin_lock_irqsave(get_ccwdev_lock(ch->cdev), saveflags); - rc = ccw_device_halt(ch->cdev, (unsigned long) ch); - spin_unlock_irqrestore(get_ccwdev_lock(ch->cdev), saveflags); - if (rc != 0) { - if (rc != -EBUSY) - fsm_deltimer(&ch->timer); - ccw_check_return_code(ch, rc, "initial HaltIO"); - } -#ifdef DEBUG - ctc_pr_debug("ctc: %s(): leaving\n", __func__); -#endif -} - -/** - * Shutdown a channel. - * - * @param fi An instance of a channel statemachine. - * @param event The event, just happened. - * @param arg Generic pointer, casted from channel * upon call. - */ -static void -ch_action_haltio(fsm_instance * fi, int event, void *arg) -{ - struct channel *ch = (struct channel *) arg; - unsigned long saveflags; - int rc; - int oldstate; - - DBF_TEXT(trace, 3, __FUNCTION__); - fsm_deltimer(&ch->timer); - fsm_addtimer(&ch->timer, CTC_TIMEOUT_5SEC, CH_EVENT_TIMER, ch); - saveflags = 0; /* avoids comp warning with - spin_unlock_irqrestore */ - if (event == CH_EVENT_STOP) // only for STOP not yet locked - spin_lock_irqsave(get_ccwdev_lock(ch->cdev), saveflags); - oldstate = fsm_getstate(fi); - fsm_newstate(fi, CH_STATE_TERM); - rc = ccw_device_halt(ch->cdev, (unsigned long) ch); - if (event == CH_EVENT_STOP) - spin_unlock_irqrestore(get_ccwdev_lock(ch->cdev), saveflags); - if (rc != 0) { - if (rc != -EBUSY) { - fsm_deltimer(&ch->timer); - fsm_newstate(fi, oldstate); - } - ccw_check_return_code(ch, rc, "HaltIO in ch_action_haltio"); - } -} - -/** - * A channel has successfully been halted. - * Cleanup it's queue and notify interface statemachine. - * - * @param fi An instance of a channel statemachine. - * @param event The event, just happened. - * @param arg Generic pointer, casted from channel * upon call. 
- */ -static void -ch_action_stopped(fsm_instance * fi, int event, void *arg) -{ - struct channel *ch = (struct channel *) arg; - struct net_device *dev = ch->netdev; - - DBF_TEXT(trace, 3, __FUNCTION__); - fsm_deltimer(&ch->timer); - fsm_newstate(fi, CH_STATE_STOPPED); - if (ch->trans_skb != NULL) { - clear_normalized_cda(&ch->ccw[1]); - dev_kfree_skb(ch->trans_skb); - ch->trans_skb = NULL; - } - if (CHANNEL_DIRECTION(ch->flags) == READ) { - skb_queue_purge(&ch->io_queue); - fsm_event(((struct ctc_priv *) dev->priv)->fsm, - DEV_EVENT_RXDOWN, dev); - } else { - ctc_purge_skb_queue(&ch->io_queue); - spin_lock(&ch->collect_lock); - ctc_purge_skb_queue(&ch->collect_queue); - ch->collect_len = 0; - spin_unlock(&ch->collect_lock); - fsm_event(((struct ctc_priv *) dev->priv)->fsm, - DEV_EVENT_TXDOWN, dev); - } -} - -/** - * A stop command from device statemachine arrived and we are in - * not operational mode. Set state to stopped. - * - * @param fi An instance of a channel statemachine. - * @param event The event, just happened. - * @param arg Generic pointer, casted from channel * upon call. - */ -static void -ch_action_stop(fsm_instance * fi, int event, void *arg) -{ - fsm_newstate(fi, CH_STATE_STOPPED); -} - -/** - * A machine check for no path, not operational status or gone device has - * happened. - * Cleanup queue and notify interface statemachine. - * - * @param fi An instance of a channel statemachine. - * @param event The event, just happened. - * @param arg Generic pointer, casted from channel * upon call. - */ -static void -ch_action_fail(fsm_instance * fi, int event, void *arg) -{ - struct channel *ch = (struct channel *) arg; - struct net_device *dev = ch->netdev; - - DBF_TEXT(trace, 3, __FUNCTION__); - fsm_deltimer(&ch->timer); - fsm_newstate(fi, CH_STATE_NOTOP); - if (CHANNEL_DIRECTION(ch->flags) == READ) { - skb_queue_purge(&ch->io_queue); - fsm_event(((struct ctc_priv *) dev->priv)->fsm, - DEV_EVENT_RXDOWN, dev); - } else { - ctc_purge_skb_queue(&ch->io_queue); - spin_lock(&ch->collect_lock); - ctc_purge_skb_queue(&ch->collect_queue); - ch->collect_len = 0; - spin_unlock(&ch->collect_lock); - fsm_event(((struct ctc_priv *) dev->priv)->fsm, - DEV_EVENT_TXDOWN, dev); - } -} - -/** - * Handle error during setup of channel. - * - * @param fi An instance of a channel statemachine. - * @param event The event, just happened. - * @param arg Generic pointer, casted from channel * upon call. - */ -static void -ch_action_setuperr(fsm_instance * fi, int event, void *arg) -{ - struct channel *ch = (struct channel *) arg; - struct net_device *dev = ch->netdev; - - DBF_TEXT(setup, 3, __FUNCTION__); - /** - * Special case: Got UC_RCRESET on setmode. - * This means that remote side isn't setup. In this case - * simply retry after some 10 secs... - */ - if ((fsm_getstate(fi) == CH_STATE_SETUPWAIT) && - ((event == CH_EVENT_UC_RCRESET) || - (event == CH_EVENT_UC_RSRESET))) { - fsm_newstate(fi, CH_STATE_STARTRETRY); - fsm_deltimer(&ch->timer); - fsm_addtimer(&ch->timer, CTC_TIMEOUT_5SEC, CH_EVENT_TIMER, ch); - if (CHANNEL_DIRECTION(ch->flags) == READ) { - int rc = ccw_device_halt(ch->cdev, (unsigned long) ch); - if (rc != 0) - ccw_check_return_code( - ch, rc, "HaltIO in ch_action_setuperr"); - } - return; - } - - ctc_pr_debug("%s: Error %s during %s channel setup state=%s\n", - dev->name, ch_event_names[event], - (CHANNEL_DIRECTION(ch->flags) == READ) ? 
"RX" : "TX", - fsm_getstate_str(fi)); - if (CHANNEL_DIRECTION(ch->flags) == READ) { - fsm_newstate(fi, CH_STATE_RXERR); - fsm_event(((struct ctc_priv *) dev->priv)->fsm, - DEV_EVENT_RXDOWN, dev); - } else { - fsm_newstate(fi, CH_STATE_TXERR); - fsm_event(((struct ctc_priv *) dev->priv)->fsm, - DEV_EVENT_TXDOWN, dev); - } -} - -/** - * Restart a channel after an error. - * - * @param fi An instance of a channel statemachine. - * @param event The event, just happened. - * @param arg Generic pointer, casted from channel * upon call. - */ -static void -ch_action_restart(fsm_instance * fi, int event, void *arg) -{ - unsigned long saveflags; - int oldstate; - int rc; - - struct channel *ch = (struct channel *) arg; - struct net_device *dev = ch->netdev; - - DBF_TEXT(trace, 3, __FUNCTION__); - fsm_deltimer(&ch->timer); - ctc_pr_debug("%s: %s channel restart\n", dev->name, - (CHANNEL_DIRECTION(ch->flags) == READ) ? "RX" : "TX"); - fsm_addtimer(&ch->timer, CTC_TIMEOUT_5SEC, CH_EVENT_TIMER, ch); - oldstate = fsm_getstate(fi); - fsm_newstate(fi, CH_STATE_STARTWAIT); - saveflags = 0; /* avoids compiler warning with - spin_unlock_irqrestore */ - if (event == CH_EVENT_TIMER) // only for timer not yet locked - spin_lock_irqsave(get_ccwdev_lock(ch->cdev), saveflags); - rc = ccw_device_halt(ch->cdev, (unsigned long) ch); - if (event == CH_EVENT_TIMER) - spin_unlock_irqrestore(get_ccwdev_lock(ch->cdev), saveflags); - if (rc != 0) { - if (rc != -EBUSY) { - fsm_deltimer(&ch->timer); - fsm_newstate(fi, oldstate); - } - ccw_check_return_code(ch, rc, "HaltIO in ch_action_restart"); - } -} - -/** - * Handle error during RX initial handshake (exchange of - * 0-length block header) - * - * @param fi An instance of a channel statemachine. - * @param event The event, just happened. - * @param arg Generic pointer, casted from channel * upon call. - */ -static void -ch_action_rxiniterr(fsm_instance * fi, int event, void *arg) -{ - struct channel *ch = (struct channel *) arg; - struct net_device *dev = ch->netdev; - - DBF_TEXT(setup, 3, __FUNCTION__); - if (event == CH_EVENT_TIMER) { - fsm_deltimer(&ch->timer); - ctc_pr_debug("%s: Timeout during RX init handshake\n", dev->name); - if (ch->retry++ < 3) - ch_action_restart(fi, event, arg); - else { - fsm_newstate(fi, CH_STATE_RXERR); - fsm_event(((struct ctc_priv *) dev->priv)->fsm, - DEV_EVENT_RXDOWN, dev); - } - } else - ctc_pr_warn("%s: Error during RX init handshake\n", dev->name); -} - -/** - * Notify device statemachine if we gave up initialization - * of RX channel. - * - * @param fi An instance of a channel statemachine. - * @param event The event, just happened. - * @param arg Generic pointer, casted from channel * upon call. - */ -static void -ch_action_rxinitfail(fsm_instance * fi, int event, void *arg) -{ - struct channel *ch = (struct channel *) arg; - struct net_device *dev = ch->netdev; - - DBF_TEXT(setup, 3, __FUNCTION__); - fsm_newstate(fi, CH_STATE_RXERR); - ctc_pr_warn("%s: RX initialization failed\n", dev->name); - ctc_pr_warn("%s: RX <-> RX connection detected\n", dev->name); - fsm_event(((struct ctc_priv *) dev->priv)->fsm, DEV_EVENT_RXDOWN, dev); -} - -/** - * Handle RX Unit check remote reset (remote disconnected) - * - * @param fi An instance of a channel statemachine. - * @param event The event, just happened. - * @param arg Generic pointer, casted from channel * upon call. 
- */ -static void -ch_action_rxdisc(fsm_instance * fi, int event, void *arg) -{ - struct channel *ch = (struct channel *) arg; - struct channel *ch2; - struct net_device *dev = ch->netdev; - - DBF_TEXT(trace, 3, __FUNCTION__); - fsm_deltimer(&ch->timer); - ctc_pr_debug("%s: Got remote disconnect, re-initializing ...\n", - dev->name); - - /** - * Notify device statemachine - */ - fsm_event(((struct ctc_priv *) dev->priv)->fsm, DEV_EVENT_RXDOWN, dev); - fsm_event(((struct ctc_priv *) dev->priv)->fsm, DEV_EVENT_TXDOWN, dev); - - fsm_newstate(fi, CH_STATE_DTERM); - ch2 = ((struct ctc_priv *) dev->priv)->channel[WRITE]; - fsm_newstate(ch2->fsm, CH_STATE_DTERM); - - ccw_device_halt(ch->cdev, (unsigned long) ch); - ccw_device_halt(ch2->cdev, (unsigned long) ch2); -} - -/** - * Handle error during TX channel initialization. - * - * @param fi An instance of a channel statemachine. - * @param event The event, just happened. - * @param arg Generic pointer, casted from channel * upon call. - */ -static void -ch_action_txiniterr(fsm_instance * fi, int event, void *arg) -{ - struct channel *ch = (struct channel *) arg; - struct net_device *dev = ch->netdev; - - DBF_TEXT(setup, 2, __FUNCTION__); - if (event == CH_EVENT_TIMER) { - fsm_deltimer(&ch->timer); - ctc_pr_debug("%s: Timeout during TX init handshake\n", dev->name); - if (ch->retry++ < 3) - ch_action_restart(fi, event, arg); - else { - fsm_newstate(fi, CH_STATE_TXERR); - fsm_event(((struct ctc_priv *) dev->priv)->fsm, - DEV_EVENT_TXDOWN, dev); - } - } else - ctc_pr_warn("%s: Error during TX init handshake\n", dev->name); -} - -/** - * Handle TX timeout by retrying operation. - * - * @param fi An instance of a channel statemachine. - * @param event The event, just happened. - * @param arg Generic pointer, casted from channel * upon call. - */ -static void -ch_action_txretry(fsm_instance * fi, int event, void *arg) -{ - struct channel *ch = (struct channel *) arg; - struct net_device *dev = ch->netdev; - unsigned long saveflags; - - DBF_TEXT(trace, 4, __FUNCTION__); - fsm_deltimer(&ch->timer); - if (ch->retry++ > 3) { - ctc_pr_debug("%s: TX retry failed, restarting channel\n", - dev->name); - fsm_event(((struct ctc_priv *) dev->priv)->fsm, - DEV_EVENT_TXDOWN, dev); - ch_action_restart(fi, event, arg); - } else { - struct sk_buff *skb; - - ctc_pr_debug("%s: TX retry %d\n", dev->name, ch->retry); - if ((skb = skb_peek(&ch->io_queue))) { - int rc = 0; - - clear_normalized_cda(&ch->ccw[4]); - ch->ccw[4].count = skb->len; - if (set_normalized_cda(&ch->ccw[4], skb->data)) { - ctc_pr_debug( - "%s: IDAL alloc failed, chan restart\n", - dev->name); - fsm_event(((struct ctc_priv *) dev->priv)->fsm, - DEV_EVENT_TXDOWN, dev); - ch_action_restart(fi, event, arg); - return; - } - fsm_addtimer(&ch->timer, 1000, CH_EVENT_TIMER, ch); - saveflags = 0; /* avoids compiler warning with - spin_unlock_irqrestore */ - if (event == CH_EVENT_TIMER) // only for TIMER not yet locked - spin_lock_irqsave(get_ccwdev_lock(ch->cdev), - saveflags); - rc = ccw_device_start(ch->cdev, &ch->ccw[3], - (unsigned long) ch, 0xff, 0); - if (event == CH_EVENT_TIMER) - spin_unlock_irqrestore(get_ccwdev_lock(ch->cdev), - saveflags); - if (rc != 0) { - fsm_deltimer(&ch->timer); - ccw_check_return_code(ch, rc, "TX in ch_action_txretry"); - ctc_purge_skb_queue(&ch->io_queue); - } - } - } - -} - -/** - * Handle fatal errors during an I/O command. - * - * @param fi An instance of a channel statemachine. - * @param event The event, just happened. 
- * @param arg Generic pointer, casted from channel * upon call. - */ -static void -ch_action_iofatal(fsm_instance * fi, int event, void *arg) -{ - struct channel *ch = (struct channel *) arg; - struct net_device *dev = ch->netdev; - - DBF_TEXT(trace, 3, __FUNCTION__); - fsm_deltimer(&ch->timer); - if (CHANNEL_DIRECTION(ch->flags) == READ) { - ctc_pr_debug("%s: RX I/O error\n", dev->name); - fsm_newstate(fi, CH_STATE_RXERR); - fsm_event(((struct ctc_priv *) dev->priv)->fsm, - DEV_EVENT_RXDOWN, dev); - } else { - ctc_pr_debug("%s: TX I/O error\n", dev->name); - fsm_newstate(fi, CH_STATE_TXERR); - fsm_event(((struct ctc_priv *) dev->priv)->fsm, - DEV_EVENT_TXDOWN, dev); - } -} - -static void -ch_action_reinit(fsm_instance *fi, int event, void *arg) -{ - struct channel *ch = (struct channel *)arg; - struct net_device *dev = ch->netdev; - struct ctc_priv *privptr = dev->priv; - - DBF_TEXT(trace, 4, __FUNCTION__); - ch_action_iofatal(fi, event, arg); - fsm_addtimer(&privptr->restart_timer, 1000, DEV_EVENT_RESTART, dev); -} - -/** - * The statemachine for a channel. - */ -static const fsm_node ch_fsm[] = { - {CH_STATE_STOPPED, CH_EVENT_STOP, fsm_action_nop }, - {CH_STATE_STOPPED, CH_EVENT_START, ch_action_start }, - {CH_STATE_STOPPED, CH_EVENT_FINSTAT, fsm_action_nop }, - {CH_STATE_STOPPED, CH_EVENT_MC_FAIL, fsm_action_nop }, - - {CH_STATE_NOTOP, CH_EVENT_STOP, ch_action_stop }, - {CH_STATE_NOTOP, CH_EVENT_START, fsm_action_nop }, - {CH_STATE_NOTOP, CH_EVENT_FINSTAT, fsm_action_nop }, - {CH_STATE_NOTOP, CH_EVENT_MC_FAIL, fsm_action_nop }, - {CH_STATE_NOTOP, CH_EVENT_MC_GOOD, ch_action_start }, - - {CH_STATE_STARTWAIT, CH_EVENT_STOP, ch_action_haltio }, - {CH_STATE_STARTWAIT, CH_EVENT_START, fsm_action_nop }, - {CH_STATE_STARTWAIT, CH_EVENT_FINSTAT, ch_action_setmode }, - {CH_STATE_STARTWAIT, CH_EVENT_TIMER, ch_action_setuperr }, - {CH_STATE_STARTWAIT, CH_EVENT_IO_ENODEV, ch_action_iofatal }, - {CH_STATE_STARTWAIT, CH_EVENT_IO_EIO, ch_action_reinit }, - {CH_STATE_STARTWAIT, CH_EVENT_MC_FAIL, ch_action_fail }, - - {CH_STATE_STARTRETRY, CH_EVENT_STOP, ch_action_haltio }, - {CH_STATE_STARTRETRY, CH_EVENT_TIMER, ch_action_setmode }, - {CH_STATE_STARTRETRY, CH_EVENT_FINSTAT, fsm_action_nop }, - {CH_STATE_STARTRETRY, CH_EVENT_MC_FAIL, ch_action_fail }, - - {CH_STATE_SETUPWAIT, CH_EVENT_STOP, ch_action_haltio }, - {CH_STATE_SETUPWAIT, CH_EVENT_START, fsm_action_nop }, - {CH_STATE_SETUPWAIT, CH_EVENT_FINSTAT, ch_action_firstio }, - {CH_STATE_SETUPWAIT, CH_EVENT_UC_RCRESET, ch_action_setuperr }, - {CH_STATE_SETUPWAIT, CH_EVENT_UC_RSRESET, ch_action_setuperr }, - {CH_STATE_SETUPWAIT, CH_EVENT_TIMER, ch_action_setmode }, - {CH_STATE_SETUPWAIT, CH_EVENT_IO_ENODEV, ch_action_iofatal }, - {CH_STATE_SETUPWAIT, CH_EVENT_IO_EIO, ch_action_reinit }, - {CH_STATE_SETUPWAIT, CH_EVENT_MC_FAIL, ch_action_fail }, - - {CH_STATE_RXINIT, CH_EVENT_STOP, ch_action_haltio }, - {CH_STATE_RXINIT, CH_EVENT_START, fsm_action_nop }, - {CH_STATE_RXINIT, CH_EVENT_FINSTAT, ch_action_rxidle }, - {CH_STATE_RXINIT, CH_EVENT_UC_RCRESET, ch_action_rxiniterr }, - {CH_STATE_RXINIT, CH_EVENT_UC_RSRESET, ch_action_rxiniterr }, - {CH_STATE_RXINIT, CH_EVENT_TIMER, ch_action_rxiniterr }, - {CH_STATE_RXINIT, CH_EVENT_ATTNBUSY, ch_action_rxinitfail }, - {CH_STATE_RXINIT, CH_EVENT_IO_ENODEV, ch_action_iofatal }, - {CH_STATE_RXINIT, CH_EVENT_IO_EIO, ch_action_reinit }, - {CH_STATE_RXINIT, CH_EVENT_UC_ZERO, ch_action_firstio }, - {CH_STATE_RXINIT, CH_EVENT_MC_FAIL, ch_action_fail }, - - {CH_STATE_RXIDLE, CH_EVENT_STOP, ch_action_haltio }, - 
{CH_STATE_RXIDLE, CH_EVENT_START, fsm_action_nop }, - {CH_STATE_RXIDLE, CH_EVENT_FINSTAT, ch_action_rx }, - {CH_STATE_RXIDLE, CH_EVENT_UC_RCRESET, ch_action_rxdisc }, -// {CH_STATE_RXIDLE, CH_EVENT_UC_RSRESET, ch_action_rxretry }, - {CH_STATE_RXIDLE, CH_EVENT_IO_ENODEV, ch_action_iofatal }, - {CH_STATE_RXIDLE, CH_EVENT_IO_EIO, ch_action_reinit }, - {CH_STATE_RXIDLE, CH_EVENT_MC_FAIL, ch_action_fail }, - {CH_STATE_RXIDLE, CH_EVENT_UC_ZERO, ch_action_rx }, - - {CH_STATE_TXINIT, CH_EVENT_STOP, ch_action_haltio }, - {CH_STATE_TXINIT, CH_EVENT_START, fsm_action_nop }, - {CH_STATE_TXINIT, CH_EVENT_FINSTAT, ch_action_txidle }, - {CH_STATE_TXINIT, CH_EVENT_UC_RCRESET, ch_action_txiniterr }, - {CH_STATE_TXINIT, CH_EVENT_UC_RSRESET, ch_action_txiniterr }, - {CH_STATE_TXINIT, CH_EVENT_TIMER, ch_action_txiniterr }, - {CH_STATE_TXINIT, CH_EVENT_IO_ENODEV, ch_action_iofatal }, - {CH_STATE_TXINIT, CH_EVENT_IO_EIO, ch_action_reinit }, - {CH_STATE_TXINIT, CH_EVENT_MC_FAIL, ch_action_fail }, - - {CH_STATE_TXIDLE, CH_EVENT_STOP, ch_action_haltio }, - {CH_STATE_TXIDLE, CH_EVENT_START, fsm_action_nop }, - {CH_STATE_TXIDLE, CH_EVENT_FINSTAT, ch_action_firstio }, - {CH_STATE_TXIDLE, CH_EVENT_UC_RCRESET, fsm_action_nop }, - {CH_STATE_TXIDLE, CH_EVENT_UC_RSRESET, fsm_action_nop }, - {CH_STATE_TXIDLE, CH_EVENT_IO_ENODEV, ch_action_iofatal }, - {CH_STATE_TXIDLE, CH_EVENT_IO_EIO, ch_action_reinit }, - {CH_STATE_TXIDLE, CH_EVENT_MC_FAIL, ch_action_fail }, - - {CH_STATE_TERM, CH_EVENT_STOP, fsm_action_nop }, - {CH_STATE_TERM, CH_EVENT_START, ch_action_restart }, - {CH_STATE_TERM, CH_EVENT_FINSTAT, ch_action_stopped }, - {CH_STATE_TERM, CH_EVENT_UC_RCRESET, fsm_action_nop }, - {CH_STATE_TERM, CH_EVENT_UC_RSRESET, fsm_action_nop }, - {CH_STATE_TERM, CH_EVENT_MC_FAIL, ch_action_fail }, - - {CH_STATE_DTERM, CH_EVENT_STOP, ch_action_haltio }, - {CH_STATE_DTERM, CH_EVENT_START, ch_action_restart }, - {CH_STATE_DTERM, CH_EVENT_FINSTAT, ch_action_setmode }, - {CH_STATE_DTERM, CH_EVENT_UC_RCRESET, fsm_action_nop }, - {CH_STATE_DTERM, CH_EVENT_UC_RSRESET, fsm_action_nop }, - {CH_STATE_DTERM, CH_EVENT_MC_FAIL, ch_action_fail }, - - {CH_STATE_TX, CH_EVENT_STOP, ch_action_haltio }, - {CH_STATE_TX, CH_EVENT_START, fsm_action_nop }, - {CH_STATE_TX, CH_EVENT_FINSTAT, ch_action_txdone }, - {CH_STATE_TX, CH_EVENT_UC_RCRESET, ch_action_txretry }, - {CH_STATE_TX, CH_EVENT_UC_RSRESET, ch_action_txretry }, - {CH_STATE_TX, CH_EVENT_TIMER, ch_action_txretry }, - {CH_STATE_TX, CH_EVENT_IO_ENODEV, ch_action_iofatal }, - {CH_STATE_TX, CH_EVENT_IO_EIO, ch_action_reinit }, - {CH_STATE_TX, CH_EVENT_MC_FAIL, ch_action_fail }, - - {CH_STATE_RXERR, CH_EVENT_STOP, ch_action_haltio }, - {CH_STATE_TXERR, CH_EVENT_STOP, ch_action_haltio }, - {CH_STATE_TXERR, CH_EVENT_MC_FAIL, ch_action_fail }, - {CH_STATE_RXERR, CH_EVENT_MC_FAIL, ch_action_fail }, -}; - -static const int CH_FSM_LEN = sizeof (ch_fsm) / sizeof (fsm_node); - -/** - * Functions related to setup and device detection. - *****************************************************************************/ - -static inline int -less_than(char *id1, char *id2) -{ - int dev1, dev2, i; - - for (i = 0; i < 5; i++) { - id1++; - id2++; - } - dev1 = simple_strtoul(id1, &id1, 16); - dev2 = simple_strtoul(id2, &id2, 16); - - return (dev1 < dev2); -} - -/** - * Add a new channel to the list of channels. - * Keeps the channel list sorted. - * - * @param cdev The ccw_device to be added. - * @param type The type class of the new channel. - * - * @return 0 on success, !0 on error. 
- */ -static int -add_channel(struct ccw_device *cdev, enum channel_types type) -{ - struct channel **c = &channels; - struct channel *ch; - - DBF_TEXT(trace, 2, __FUNCTION__); - ch = kzalloc(sizeof(struct channel), GFP_KERNEL); - if (!ch) { - ctc_pr_warn("ctc: Out of memory in add_channel\n"); - return -1; - } - /* assure all flags and counters are reset */ - ch->ccw = kzalloc(8 * sizeof(struct ccw1), GFP_KERNEL | GFP_DMA); - if (!ch->ccw) { - kfree(ch); - ctc_pr_warn("ctc: Out of memory in add_channel\n"); - return -1; - } - - - /** - * "static" ccws are used in the following way: - * - * ccw[0..2] (Channel program for generic I/O): - * 0: prepare - * 1: read or write (depending on direction) with fixed - * buffer (idal allocated once when buffer is allocated) - * 2: nop - * ccw[3..5] (Channel program for direct write of packets) - * 3: prepare - * 4: write (idal allocated on every write). - * 5: nop - * ccw[6..7] (Channel program for initial channel setup): - * 6: set extended mode - * 7: nop - * - * ch->ccw[0..5] are initialized in ch_action_start because - * the channel's direction is yet unknown here. - */ - ch->ccw[6].cmd_code = CCW_CMD_SET_EXTENDED; - ch->ccw[6].flags = CCW_FLAG_SLI; - - ch->ccw[7].cmd_code = CCW_CMD_NOOP; - ch->ccw[7].flags = CCW_FLAG_SLI; - - ch->cdev = cdev; - snprintf(ch->id, CTC_ID_SIZE, "ch-%s", cdev->dev.bus_id); - ch->type = type; - ch->fsm = init_fsm(ch->id, ch_state_names, - ch_event_names, NR_CH_STATES, NR_CH_EVENTS, - ch_fsm, CH_FSM_LEN, GFP_KERNEL); - if (ch->fsm == NULL) { - ctc_pr_warn("ctc: Could not create FSM in add_channel\n"); - kfree(ch->ccw); - kfree(ch); - return -1; - } - fsm_newstate(ch->fsm, CH_STATE_IDLE); - ch->irb = kzalloc(sizeof(struct irb), GFP_KERNEL); - if (!ch->irb) { - ctc_pr_warn("ctc: Out of memory in add_channel\n"); - kfree_fsm(ch->fsm); - kfree(ch->ccw); - kfree(ch); - return -1; - } - while (*c && less_than((*c)->id, ch->id)) - c = &(*c)->next; - if (*c && (!strncmp((*c)->id, ch->id, CTC_ID_SIZE))) { - ctc_pr_debug( - "ctc: add_channel: device %s already in list, " - "using old entry\n", (*c)->id); - kfree(ch->irb); - kfree_fsm(ch->fsm); - kfree(ch->ccw); - kfree(ch); - return 0; - } - - spin_lock_init(&ch->collect_lock); - - fsm_settimer(ch->fsm, &ch->timer); - skb_queue_head_init(&ch->io_queue); - skb_queue_head_init(&ch->collect_queue); - ch->next = *c; - *c = ch; - return 0; -} - -/** - * Release a specific channel in the channel list. - * - * @param ch Pointer to channel struct to be released. - */ -static void -channel_free(struct channel *ch) -{ - ch->flags &= ~CHANNEL_FLAGS_INUSE; - fsm_newstate(ch->fsm, CH_STATE_IDLE); -} - -/** - * Remove a specific channel in the channel list. - * - * @param ch Pointer to channel struct to be released. - */ -static void -channel_remove(struct channel *ch) -{ - struct channel **c = &channels; - - DBF_TEXT(trace, 2, __FUNCTION__); - if (ch == NULL) - return; - - channel_free(ch); - while (*c) { - if (*c == ch) { - *c = ch->next; - fsm_deltimer(&ch->timer); - kfree_fsm(ch->fsm); - clear_normalized_cda(&ch->ccw[4]); - if (ch->trans_skb != NULL) { - clear_normalized_cda(&ch->ccw[1]); - dev_kfree_skb(ch->trans_skb); - } - kfree(ch->ccw); - kfree(ch->irb); - kfree(ch); - return; - } - c = &((*c)->next); - } -} - -/** - * Get a specific channel from the channel list. - * - * @param type Type of channel we are interested in. - * @param id Id of channel we are interested in. - * @param direction Direction we want to use this channel for. 
- * - * @return Pointer to a channel or NULL if no matching channel available. - */ -static struct channel -* -channel_get(enum channel_types type, char *id, int direction) -{ - struct channel *ch = channels; - - DBF_TEXT(trace, 3, __FUNCTION__); -#ifdef DEBUG - ctc_pr_debug("ctc: %s(): searching for ch with id %s and type %d\n", - __func__, id, type); -#endif - - while (ch && ((strncmp(ch->id, id, CTC_ID_SIZE)) || (ch->type != type))) { -#ifdef DEBUG - ctc_pr_debug("ctc: %s(): ch=0x%p (id=%s, type=%d\n", - __func__, ch, ch->id, ch->type); -#endif - ch = ch->next; - } -#ifdef DEBUG - ctc_pr_debug("ctc: %s(): ch=0x%pq (id=%s, type=%d\n", - __func__, ch, ch->id, ch->type); -#endif - if (!ch) { - ctc_pr_warn("ctc: %s(): channel with id %s " - "and type %d not found in channel list\n", - __func__, id, type); - } else { - if (ch->flags & CHANNEL_FLAGS_INUSE) - ch = NULL; - else { - ch->flags |= CHANNEL_FLAGS_INUSE; - ch->flags &= ~CHANNEL_FLAGS_RWMASK; - ch->flags |= (direction == WRITE) - ? CHANNEL_FLAGS_WRITE : CHANNEL_FLAGS_READ; - fsm_newstate(ch->fsm, CH_STATE_STOPPED); - } - } - return ch; -} - -/** - * Return the channel type by name. - * - * @param name Name of network interface. - * - * @return Type class of channel to be used for that interface. - */ -static enum channel_types inline -extract_channel_media(char *name) -{ - enum channel_types ret = channel_type_unknown; - - if (name != NULL) { - if (strncmp(name, "ctc", 3) == 0) - ret = channel_type_parallel; - if (strncmp(name, "escon", 5) == 0) - ret = channel_type_escon; - } - return ret; -} - -static long -__ctc_check_irb_error(struct ccw_device *cdev, struct irb *irb) -{ - if (!IS_ERR(irb)) - return 0; - - switch (PTR_ERR(irb)) { - case -EIO: - ctc_pr_warn("i/o-error on device %s\n", cdev->dev.bus_id); -// CTC_DBF_TEXT(trace, 2, "ckirberr"); -// CTC_DBF_TEXT_(trace, 2, " rc%d", -EIO); - break; - case -ETIMEDOUT: - ctc_pr_warn("timeout on device %s\n", cdev->dev.bus_id); -// CTC_DBF_TEXT(trace, 2, "ckirberr"); -// CTC_DBF_TEXT_(trace, 2, " rc%d", -ETIMEDOUT); - break; - default: - ctc_pr_warn("unknown error %ld on device %s\n", PTR_ERR(irb), - cdev->dev.bus_id); -// CTC_DBF_TEXT(trace, 2, "ckirberr"); -// CTC_DBF_TEXT(trace, 2, " rc???"); - } - return PTR_ERR(irb); -} - -/** - * Main IRQ handler. - * - * @param cdev The ccw_device the interrupt is for. - * @param intparm interruption parameter. - * @param irb interruption response block. - */ -static void -ctc_irq_handler(struct ccw_device *cdev, unsigned long intparm, struct irb *irb) -{ - struct channel *ch; - struct net_device *dev; - struct ctc_priv *priv; - - DBF_TEXT(trace, 5, __FUNCTION__); - if (__ctc_check_irb_error(cdev, irb)) - return; - - /* Check for unsolicited interrupts. */ - if (!cdev->dev.driver_data) { - ctc_pr_warn("ctc: Got unsolicited irq: %s c-%02x d-%02x\n", - cdev->dev.bus_id, irb->scsw.cstat, - irb->scsw.dstat); - return; - } - - priv = ((struct ccwgroup_device *)cdev->dev.driver_data) - ->dev.driver_data; - - /* Try to extract channel from driver data. 
*/ - if (priv->channel[READ]->cdev == cdev) - ch = priv->channel[READ]; - else if (priv->channel[WRITE]->cdev == cdev) - ch = priv->channel[WRITE]; - else { - ctc_pr_err("ctc: Can't determine channel for interrupt, " - "device %s\n", cdev->dev.bus_id); - return; - } - - dev = (struct net_device *) (ch->netdev); - if (dev == NULL) { - ctc_pr_crit("ctc: ctc_irq_handler dev=NULL bus_id=%s, ch=0x%p\n", - cdev->dev.bus_id, ch); - return; - } - -#ifdef DEBUG - ctc_pr_debug("%s: interrupt for device: %s received c-%02x d-%02x\n", - dev->name, ch->id, irb->scsw.cstat, irb->scsw.dstat); -#endif - - /* Copy interruption response block. */ - memcpy(ch->irb, irb, sizeof(struct irb)); - - /* Check for good subchannel return code, otherwise error message */ - if (ch->irb->scsw.cstat) { - fsm_event(ch->fsm, CH_EVENT_SC_UNKNOWN, ch); - ctc_pr_warn("%s: subchannel check for device: %s - %02x %02x\n", - dev->name, ch->id, ch->irb->scsw.cstat, - ch->irb->scsw.dstat); - return; - } - - /* Check the reason-code of a unit check */ - if (ch->irb->scsw.dstat & DEV_STAT_UNIT_CHECK) { - ccw_unit_check(ch, ch->irb->ecw[0]); - return; - } - if (ch->irb->scsw.dstat & DEV_STAT_BUSY) { - if (ch->irb->scsw.dstat & DEV_STAT_ATTENTION) - fsm_event(ch->fsm, CH_EVENT_ATTNBUSY, ch); - else - fsm_event(ch->fsm, CH_EVENT_BUSY, ch); - return; - } - if (ch->irb->scsw.dstat & DEV_STAT_ATTENTION) { - fsm_event(ch->fsm, CH_EVENT_ATTN, ch); - return; - } - if ((ch->irb->scsw.stctl & SCSW_STCTL_SEC_STATUS) || - (ch->irb->scsw.stctl == SCSW_STCTL_STATUS_PEND) || - (ch->irb->scsw.stctl == - (SCSW_STCTL_ALERT_STATUS | SCSW_STCTL_STATUS_PEND))) - fsm_event(ch->fsm, CH_EVENT_FINSTAT, ch); - else - fsm_event(ch->fsm, CH_EVENT_IRQ, ch); - -} - -/** - * Actions for interface - statemachine. - *****************************************************************************/ - -/** - * Startup channels by sending CH_EVENT_START to each channel. - * - * @param fi An instance of an interface statemachine. - * @param event The event, just happened. - * @param arg Generic pointer, casted from struct net_device * upon call. - */ -static void -dev_action_start(fsm_instance * fi, int event, void *arg) -{ - struct net_device *dev = (struct net_device *) arg; - struct ctc_priv *privptr = dev->priv; - int direction; - - DBF_TEXT(setup, 3, __FUNCTION__); - fsm_deltimer(&privptr->restart_timer); - fsm_newstate(fi, DEV_STATE_STARTWAIT_RXTX); - for (direction = READ; direction <= WRITE; direction++) { - struct channel *ch = privptr->channel[direction]; - fsm_event(ch->fsm, CH_EVENT_START, ch); - } -} - -/** - * Shutdown channels by sending CH_EVENT_STOP to each channel. - * - * @param fi An instance of an interface statemachine. - * @param event The event, just happened. - * @param arg Generic pointer, casted from struct net_device * upon call. 
- */
-static void
-dev_action_stop(fsm_instance * fi, int event, void *arg)
-{
-        struct net_device *dev = (struct net_device *) arg;
-        struct ctc_priv *privptr = dev->priv;
-        int direction;
-
-        DBF_TEXT(trace, 3, __FUNCTION__);
-        fsm_newstate(fi, DEV_STATE_STOPWAIT_RXTX);
-        for (direction = READ; direction <= WRITE; direction++) {
-                struct channel *ch = privptr->channel[direction];
-                fsm_event(ch->fsm, CH_EVENT_STOP, ch);
-        }
-}
-static void
-dev_action_restart(fsm_instance *fi, int event, void *arg)
-{
-        struct net_device *dev = (struct net_device *)arg;
-        struct ctc_priv *privptr = dev->priv;
-
-        DBF_TEXT(trace, 3, __FUNCTION__);
-        ctc_pr_debug("%s: Restarting\n", dev->name);
-        dev_action_stop(fi, event, arg);
-        fsm_event(privptr->fsm, DEV_EVENT_STOP, dev);
-        fsm_addtimer(&privptr->restart_timer, CTC_TIMEOUT_5SEC,
-                     DEV_EVENT_START, dev);
-}
-
-/**
- * Called from channel statemachine
- * when a channel is up and running.
- *
- * @param fi An instance of an interface statemachine.
- * @param event The event, just happened.
- * @param arg Generic pointer, casted from struct net_device * upon call.
- */
-static void
-dev_action_chup(fsm_instance * fi, int event, void *arg)
-{
-        struct net_device *dev = (struct net_device *) arg;
-
-        DBF_TEXT(trace, 3, __FUNCTION__);
-        switch (fsm_getstate(fi)) {
-        case DEV_STATE_STARTWAIT_RXTX:
-                if (event == DEV_EVENT_RXUP)
-                        fsm_newstate(fi, DEV_STATE_STARTWAIT_TX);
-                else
-                        fsm_newstate(fi, DEV_STATE_STARTWAIT_RX);
-                break;
-        case DEV_STATE_STARTWAIT_RX:
-                if (event == DEV_EVENT_RXUP) {
-                        fsm_newstate(fi, DEV_STATE_RUNNING);
-                        ctc_pr_info("%s: connected with remote side\n",
-                                    dev->name);
-                        ctc_clear_busy(dev);
-                }
-                break;
-        case DEV_STATE_STARTWAIT_TX:
-                if (event == DEV_EVENT_TXUP) {
-                        fsm_newstate(fi, DEV_STATE_RUNNING);
-                        ctc_pr_info("%s: connected with remote side\n",
-                                    dev->name);
-                        ctc_clear_busy(dev);
-                }
-                break;
-        case DEV_STATE_STOPWAIT_TX:
-                if (event == DEV_EVENT_RXUP)
-                        fsm_newstate(fi, DEV_STATE_STOPWAIT_RXTX);
-                break;
-        case DEV_STATE_STOPWAIT_RX:
-                if (event == DEV_EVENT_TXUP)
-                        fsm_newstate(fi, DEV_STATE_STOPWAIT_RXTX);
-                break;
-        }
-}
-
-/**
- * Called from channel statemachine
- * when a channel has been shutdown.
- *
- * @param fi An instance of an interface statemachine.
- * @param event The event, just happened.
- * @param arg Generic pointer, casted from struct net_device * upon call.
- */ -static void -dev_action_chdown(fsm_instance * fi, int event, void *arg) -{ - - DBF_TEXT(trace, 3, __FUNCTION__); - switch (fsm_getstate(fi)) { - case DEV_STATE_RUNNING: - if (event == DEV_EVENT_TXDOWN) - fsm_newstate(fi, DEV_STATE_STARTWAIT_TX); - else - fsm_newstate(fi, DEV_STATE_STARTWAIT_RX); - break; - case DEV_STATE_STARTWAIT_RX: - if (event == DEV_EVENT_TXDOWN) - fsm_newstate(fi, DEV_STATE_STARTWAIT_RXTX); - break; - case DEV_STATE_STARTWAIT_TX: - if (event == DEV_EVENT_RXDOWN) - fsm_newstate(fi, DEV_STATE_STARTWAIT_RXTX); - break; - case DEV_STATE_STOPWAIT_RXTX: - if (event == DEV_EVENT_TXDOWN) - fsm_newstate(fi, DEV_STATE_STOPWAIT_RX); - else - fsm_newstate(fi, DEV_STATE_STOPWAIT_TX); - break; - case DEV_STATE_STOPWAIT_RX: - if (event == DEV_EVENT_RXDOWN) - fsm_newstate(fi, DEV_STATE_STOPPED); - break; - case DEV_STATE_STOPWAIT_TX: - if (event == DEV_EVENT_TXDOWN) - fsm_newstate(fi, DEV_STATE_STOPPED); - break; - } -} - -static const fsm_node dev_fsm[] = { - {DEV_STATE_STOPPED, DEV_EVENT_START, dev_action_start}, - - {DEV_STATE_STOPWAIT_RXTX, DEV_EVENT_START, dev_action_start }, - {DEV_STATE_STOPWAIT_RXTX, DEV_EVENT_RXDOWN, dev_action_chdown }, - {DEV_STATE_STOPWAIT_RXTX, DEV_EVENT_TXDOWN, dev_action_chdown }, - {DEV_STATE_STOPWAIT_RXTX, DEV_EVENT_RESTART, dev_action_restart }, - - {DEV_STATE_STOPWAIT_RX, DEV_EVENT_START, dev_action_start }, - {DEV_STATE_STOPWAIT_RX, DEV_EVENT_RXUP, dev_action_chup }, - {DEV_STATE_STOPWAIT_RX, DEV_EVENT_TXUP, dev_action_chup }, - {DEV_STATE_STOPWAIT_RX, DEV_EVENT_RXDOWN, dev_action_chdown }, - {DEV_STATE_STOPWAIT_RX, DEV_EVENT_RESTART, dev_action_restart }, - - {DEV_STATE_STOPWAIT_TX, DEV_EVENT_START, dev_action_start }, - {DEV_STATE_STOPWAIT_TX, DEV_EVENT_RXUP, dev_action_chup }, - {DEV_STATE_STOPWAIT_TX, DEV_EVENT_TXUP, dev_action_chup }, - {DEV_STATE_STOPWAIT_TX, DEV_EVENT_TXDOWN, dev_action_chdown }, - {DEV_STATE_STOPWAIT_TX, DEV_EVENT_RESTART, dev_action_restart }, - - {DEV_STATE_STARTWAIT_RXTX, DEV_EVENT_STOP, dev_action_stop }, - {DEV_STATE_STARTWAIT_RXTX, DEV_EVENT_RXUP, dev_action_chup }, - {DEV_STATE_STARTWAIT_RXTX, DEV_EVENT_TXUP, dev_action_chup }, - {DEV_STATE_STARTWAIT_RXTX, DEV_EVENT_RXDOWN, dev_action_chdown }, - {DEV_STATE_STARTWAIT_RXTX, DEV_EVENT_TXDOWN, dev_action_chdown }, - {DEV_STATE_STARTWAIT_RXTX, DEV_EVENT_RESTART, dev_action_restart }, - - {DEV_STATE_STARTWAIT_TX, DEV_EVENT_STOP, dev_action_stop }, - {DEV_STATE_STARTWAIT_TX, DEV_EVENT_RXUP, dev_action_chup }, - {DEV_STATE_STARTWAIT_TX, DEV_EVENT_TXUP, dev_action_chup }, - {DEV_STATE_STARTWAIT_TX, DEV_EVENT_RXDOWN, dev_action_chdown }, - {DEV_STATE_STARTWAIT_TX, DEV_EVENT_RESTART, dev_action_restart }, - - {DEV_STATE_STARTWAIT_RX, DEV_EVENT_STOP, dev_action_stop }, - {DEV_STATE_STARTWAIT_RX, DEV_EVENT_RXUP, dev_action_chup }, - {DEV_STATE_STARTWAIT_RX, DEV_EVENT_TXUP, dev_action_chup }, - {DEV_STATE_STARTWAIT_RX, DEV_EVENT_TXDOWN, dev_action_chdown }, - {DEV_STATE_STARTWAIT_RX, DEV_EVENT_RESTART, dev_action_restart }, - - {DEV_STATE_RUNNING, DEV_EVENT_STOP, dev_action_stop }, - {DEV_STATE_RUNNING, DEV_EVENT_RXDOWN, dev_action_chdown }, - {DEV_STATE_RUNNING, DEV_EVENT_TXDOWN, dev_action_chdown }, - {DEV_STATE_RUNNING, DEV_EVENT_TXUP, fsm_action_nop }, - {DEV_STATE_RUNNING, DEV_EVENT_RXUP, fsm_action_nop }, - {DEV_STATE_RUNNING, DEV_EVENT_RESTART, dev_action_restart }, -}; - -static const int DEV_FSM_LEN = sizeof (dev_fsm) / sizeof (fsm_node); - -/** - * Transmit a packet. - * This is a helper function for ctc_tx(). - * - * @param ch Channel to be used for sending. 
- * @param skb Pointer to struct sk_buff of packet to send. - * The linklevel header has already been set up - * by ctc_tx(). - * - * @return 0 on success, -ERRNO on failure. (Never fails.) - */ -static int -transmit_skb(struct channel *ch, struct sk_buff *skb) -{ - unsigned long saveflags; - struct ll_header header; - int rc = 0; - - DBF_TEXT(trace, 5, __FUNCTION__); - /* we need to acquire the lock for testing the state - * otherwise we can have an IRQ changing the state to - * TXIDLE after the test but before acquiring the lock. - */ - spin_lock_irqsave(&ch->collect_lock, saveflags); - if (fsm_getstate(ch->fsm) != CH_STATE_TXIDLE) { - int l = skb->len + LL_HEADER_LENGTH; - - if (ch->collect_len + l > ch->max_bufsize - 2) { - spin_unlock_irqrestore(&ch->collect_lock, saveflags); - return -EBUSY; - } else { - atomic_inc(&skb->users); - header.length = l; - header.type = skb->protocol; - header.unused = 0; - memcpy(skb_push(skb, LL_HEADER_LENGTH), &header, - LL_HEADER_LENGTH); - skb_queue_tail(&ch->collect_queue, skb); - ch->collect_len += l; - } - spin_unlock_irqrestore(&ch->collect_lock, saveflags); - } else { - __u16 block_len; - int ccw_idx; - struct sk_buff *nskb; - unsigned long hi; - spin_unlock_irqrestore(&ch->collect_lock, saveflags); - /** - * Protect skb against beeing free'd by upper - * layers. - */ - atomic_inc(&skb->users); - ch->prof.txlen += skb->len; - header.length = skb->len + LL_HEADER_LENGTH; - header.type = skb->protocol; - header.unused = 0; - memcpy(skb_push(skb, LL_HEADER_LENGTH), &header, - LL_HEADER_LENGTH); - block_len = skb->len + 2; - *((__u16 *) skb_push(skb, 2)) = block_len; - - /** - * IDAL support in CTC is broken, so we have to - * care about skb's above 2G ourselves. - */ - hi = ((unsigned long)skb_tail_pointer(skb) + - LL_HEADER_LENGTH) >> 31; - if (hi) { - nskb = alloc_skb(skb->len, GFP_ATOMIC | GFP_DMA); - if (!nskb) { - atomic_dec(&skb->users); - skb_pull(skb, LL_HEADER_LENGTH + 2); - ctc_clear_busy(ch->netdev); - return -ENOMEM; - } else { - memcpy(skb_put(nskb, skb->len), - skb->data, skb->len); - atomic_inc(&nskb->users); - atomic_dec(&skb->users); - dev_kfree_skb_irq(skb); - skb = nskb; - } - } - - ch->ccw[4].count = block_len; - if (set_normalized_cda(&ch->ccw[4], skb->data)) { - /** - * idal allocation failed, try via copying to - * trans_skb. trans_skb usually has a pre-allocated - * idal. - */ - if (ctc_checkalloc_buffer(ch, 1)) { - /** - * Remove our header. It gets added - * again on retransmit. - */ - atomic_dec(&skb->users); - skb_pull(skb, LL_HEADER_LENGTH + 2); - ctc_clear_busy(ch->netdev); - return -EBUSY; - } - - skb_reset_tail_pointer(ch->trans_skb); - ch->trans_skb->len = 0; - ch->ccw[1].count = skb->len; - skb_copy_from_linear_data(skb, skb_put(ch->trans_skb, - skb->len), - skb->len); - atomic_dec(&skb->users); - dev_kfree_skb_irq(skb); - ccw_idx = 0; - } else { - skb_queue_tail(&ch->io_queue, skb); - ccw_idx = 3; - } - ch->retry = 0; - fsm_newstate(ch->fsm, CH_STATE_TX); - fsm_addtimer(&ch->timer, CTC_TIMEOUT_5SEC, CH_EVENT_TIMER, ch); - spin_lock_irqsave(get_ccwdev_lock(ch->cdev), saveflags); - ch->prof.send_stamp = current_kernel_time(); - rc = ccw_device_start(ch->cdev, &ch->ccw[ccw_idx], - (unsigned long) ch, 0xff, 0); - spin_unlock_irqrestore(get_ccwdev_lock(ch->cdev), saveflags); - if (ccw_idx == 3) - ch->prof.doios_single++; - if (rc != 0) { - fsm_deltimer(&ch->timer); - ccw_check_return_code(ch, rc, "single skb TX"); - if (ccw_idx == 3) - skb_dequeue_tail(&ch->io_queue); - /** - * Remove our header. 
It gets added - * again on retransmit. - */ - skb_pull(skb, LL_HEADER_LENGTH + 2); - } else { - if (ccw_idx == 0) { - struct net_device *dev = ch->netdev; - struct ctc_priv *privptr = dev->priv; - privptr->stats.tx_packets++; - privptr->stats.tx_bytes += - skb->len - LL_HEADER_LENGTH; - } - } - } - - ctc_clear_busy(ch->netdev); - return rc; -} - -/** - * Interface API for upper network layers - *****************************************************************************/ - -/** - * Open an interface. - * Called from generic network layer when ifconfig up is run. - * - * @param dev Pointer to interface struct. - * - * @return 0 on success, -ERRNO on failure. (Never fails.) - */ -static int -ctc_open(struct net_device * dev) -{ - DBF_TEXT(trace, 5, __FUNCTION__); - fsm_event(((struct ctc_priv *) dev->priv)->fsm, DEV_EVENT_START, dev); - return 0; -} - -/** - * Close an interface. - * Called from generic network layer when ifconfig down is run. - * - * @param dev Pointer to interface struct. - * - * @return 0 on success, -ERRNO on failure. (Never fails.) - */ -static int -ctc_close(struct net_device * dev) -{ - DBF_TEXT(trace, 5, __FUNCTION__); - fsm_event(((struct ctc_priv *) dev->priv)->fsm, DEV_EVENT_STOP, dev); - return 0; -} - -/** - * Start transmission of a packet. - * Called from generic network device layer. - * - * @param skb Pointer to buffer containing the packet. - * @param dev Pointer to interface struct. - * - * @return 0 if packet consumed, !0 if packet rejected. - * Note: If we return !0, then the packet is free'd by - * the generic network layer. - */ -static int -ctc_tx(struct sk_buff *skb, struct net_device * dev) -{ - int rc = 0; - struct ctc_priv *privptr = (struct ctc_priv *) dev->priv; - - DBF_TEXT(trace, 5, __FUNCTION__); - /** - * Some sanity checks ... - */ - if (skb == NULL) { - ctc_pr_warn("%s: NULL sk_buff passed\n", dev->name); - privptr->stats.tx_dropped++; - return 0; - } - if (skb_headroom(skb) < (LL_HEADER_LENGTH + 2)) { - ctc_pr_warn("%s: Got sk_buff with head room < %ld bytes\n", - dev->name, LL_HEADER_LENGTH + 2); - dev_kfree_skb(skb); - privptr->stats.tx_dropped++; - return 0; - } - - /** - * If channels are not running, try to restart them - * and throw away packet. - */ - if (fsm_getstate(privptr->fsm) != DEV_STATE_RUNNING) { - fsm_event(privptr->fsm, DEV_EVENT_START, dev); - dev_kfree_skb(skb); - privptr->stats.tx_dropped++; - privptr->stats.tx_errors++; - privptr->stats.tx_carrier_errors++; - return 0; - } - - if (ctc_test_and_set_busy(dev)) - return -EBUSY; - - dev->trans_start = jiffies; - if (transmit_skb(privptr->channel[WRITE], skb) != 0) - rc = 1; - return rc; -} - -/** - * Sets MTU of an interface. - * - * @param dev Pointer to interface struct. - * @param new_mtu The new MTU to use for this interface. - * - * @return 0 on success, -EINVAL if MTU is out of valid range. - * (valid range is 576 .. 65527). If VM is on the - * remote side, maximum MTU is 32760, however this is - * not checked here. - */ -static int -ctc_change_mtu(struct net_device * dev, int new_mtu) -{ - struct ctc_priv *privptr = (struct ctc_priv *) dev->priv; - - DBF_TEXT(trace, 3, __FUNCTION__); - if ((new_mtu < 576) || (new_mtu > 65527) || - (new_mtu > (privptr->channel[READ]->max_bufsize - - LL_HEADER_LENGTH - 2))) - return -EINVAL; - dev->mtu = new_mtu; - dev->hard_header_len = LL_HEADER_LENGTH + 2; - return 0; -} - -/** - * Returns interface statistics of a device. - * - * @param dev Pointer to interface struct. - * - * @return Pointer to stats struct of this interface. 
- */ -static struct net_device_stats * -ctc_stats(struct net_device * dev) -{ - return &((struct ctc_priv *) dev->priv)->stats; -} - -/* - * sysfs attributes - */ - -static ssize_t -buffer_show(struct device *dev, struct device_attribute *attr, char *buf) -{ - struct ctc_priv *priv; - - priv = dev->driver_data; - if (!priv) - return -ENODEV; - return sprintf(buf, "%d\n", - priv->buffer_size); -} - -static ssize_t -buffer_write(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) -{ - struct ctc_priv *priv; - struct net_device *ndev; - int bs1; - char buffer[16]; - - DBF_TEXT(trace, 3, __FUNCTION__); - DBF_TEXT(trace, 3, buf); - priv = dev->driver_data; - if (!priv) { - DBF_TEXT(trace, 3, "bfnopriv"); - return -ENODEV; - } - - sscanf(buf, "%u", &bs1); - if (bs1 > CTC_BUFSIZE_LIMIT) - goto einval; - if (bs1 < (576 + LL_HEADER_LENGTH + 2)) - goto einval; - priv->buffer_size = bs1; // just to overwrite the default - - ndev = priv->channel[READ]->netdev; - if (!ndev) { - DBF_TEXT(trace, 3, "bfnondev"); - return -ENODEV; - } - - if ((ndev->flags & IFF_RUNNING) && - (bs1 < (ndev->mtu + LL_HEADER_LENGTH + 2))) - goto einval; - - priv->channel[READ]->max_bufsize = bs1; - priv->channel[WRITE]->max_bufsize = bs1; - if (!(ndev->flags & IFF_RUNNING)) - ndev->mtu = bs1 - LL_HEADER_LENGTH - 2; - priv->channel[READ]->flags |= CHANNEL_FLAGS_BUFSIZE_CHANGED; - priv->channel[WRITE]->flags |= CHANNEL_FLAGS_BUFSIZE_CHANGED; - - sprintf(buffer, "%d",priv->buffer_size); - DBF_TEXT(trace, 3, buffer); - return count; - -einval: - DBF_TEXT(trace, 3, "buff_err"); - return -EINVAL; -} - -static ssize_t -loglevel_show(struct device *dev, struct device_attribute *attr, char *buf) -{ - return sprintf(buf, "%d\n", loglevel); -} - -static ssize_t -loglevel_write(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) -{ - int ll1; - - DBF_TEXT(trace, 5, __FUNCTION__); - sscanf(buf, "%i", &ll1); - - if ((ll1 > CTC_LOGLEVEL_MAX) || (ll1 < 0)) - return -EINVAL; - loglevel = ll1; - return count; -} - -static void -ctc_print_statistics(struct ctc_priv *priv) -{ - char *sbuf; - char *p; - - DBF_TEXT(trace, 4, __FUNCTION__); - if (!priv) - return; - sbuf = kmalloc(2048, GFP_KERNEL); - if (sbuf == NULL) - return; - p = sbuf; - - p += sprintf(p, " Device FSM state: %s\n", - fsm_getstate_str(priv->fsm)); - p += sprintf(p, " RX channel FSM state: %s\n", - fsm_getstate_str(priv->channel[READ]->fsm)); - p += sprintf(p, " TX channel FSM state: %s\n", - fsm_getstate_str(priv->channel[WRITE]->fsm)); - p += sprintf(p, " Max. TX buffer used: %ld\n", - priv->channel[WRITE]->prof.maxmulti); - p += sprintf(p, " Max. chained SKBs: %ld\n", - priv->channel[WRITE]->prof.maxcqueue); - p += sprintf(p, " TX single write ops: %ld\n", - priv->channel[WRITE]->prof.doios_single); - p += sprintf(p, " TX multi write ops: %ld\n", - priv->channel[WRITE]->prof.doios_multi); - p += sprintf(p, " Netto bytes written: %ld\n", - priv->channel[WRITE]->prof.txlen); - p += sprintf(p, " Max. 
TX IO-time: %ld\n", - priv->channel[WRITE]->prof.tx_time); - - ctc_pr_debug("Statistics for %s:\n%s", - priv->channel[WRITE]->netdev->name, sbuf); - kfree(sbuf); - return; -} - -static ssize_t -stats_show(struct device *dev, struct device_attribute *attr, char *buf) -{ - struct ctc_priv *priv = dev->driver_data; - if (!priv) - return -ENODEV; - ctc_print_statistics(priv); - return sprintf(buf, "0\n"); -} - -static ssize_t -stats_write(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) -{ - struct ctc_priv *priv = dev->driver_data; - if (!priv) - return -ENODEV; - /* Reset statistics */ - memset(&priv->channel[WRITE]->prof, 0, - sizeof(priv->channel[WRITE]->prof)); - return count; -} - -static void -ctc_netdev_unregister(struct net_device * dev) -{ - struct ctc_priv *privptr; - - if (!dev) - return; - privptr = (struct ctc_priv *) dev->priv; - unregister_netdev(dev); -} - -static int -ctc_netdev_register(struct net_device * dev) -{ - return register_netdev(dev); -} - -static void -ctc_free_netdevice(struct net_device * dev, int free_dev) -{ - struct ctc_priv *privptr; - if (!dev) - return; - privptr = dev->priv; - if (privptr) { - if (privptr->fsm) - kfree_fsm(privptr->fsm); - kfree(privptr); - } -#ifdef MODULE - if (free_dev) - free_netdev(dev); -#endif -} - -static ssize_t -ctc_proto_show(struct device *dev, struct device_attribute *attr, char *buf) -{ - struct ctc_priv *priv; - - priv = dev->driver_data; - if (!priv) - return -ENODEV; - - return sprintf(buf, "%d\n", priv->protocol); -} - -static ssize_t -ctc_proto_store(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) -{ - struct ctc_priv *priv; - int value; - - DBF_TEXT(trace, 3, __FUNCTION__); - pr_debug("%s() called\n", __FUNCTION__); - - priv = dev->driver_data; - if (!priv) - return -ENODEV; - sscanf(buf, "%u", &value); - if (!((value == CTC_PROTO_S390) || - (value == CTC_PROTO_LINUX) || - (value == CTC_PROTO_OS390))) - return -EINVAL; - priv->protocol = value; - - return count; -} - -static ssize_t -ctc_type_show(struct device *dev, struct device_attribute *attr, char *buf) -{ - struct ccwgroup_device *cgdev; - - cgdev = to_ccwgroupdev(dev); - if (!cgdev) - return -ENODEV; - - return sprintf(buf, "%s\n", cu3088_type[cgdev->cdev[0]->id.driver_info]); -} - -static DEVICE_ATTR(buffer, 0644, buffer_show, buffer_write); -static DEVICE_ATTR(protocol, 0644, ctc_proto_show, ctc_proto_store); -static DEVICE_ATTR(type, 0444, ctc_type_show, NULL); - -static DEVICE_ATTR(loglevel, 0644, loglevel_show, loglevel_write); -static DEVICE_ATTR(stats, 0644, stats_show, stats_write); - -static struct attribute *ctc_attr[] = { - &dev_attr_protocol.attr, - &dev_attr_type.attr, - &dev_attr_buffer.attr, - NULL, -}; - -static struct attribute_group ctc_attr_group = { - .attrs = ctc_attr, -}; - -static int -ctc_add_attributes(struct device *dev) -{ - int rc; - - rc = device_create_file(dev, &dev_attr_loglevel); - if (rc) - goto out; - rc = device_create_file(dev, &dev_attr_stats); - if (!rc) - goto out; - device_remove_file(dev, &dev_attr_loglevel); -out: - return rc; -} - -static void -ctc_remove_attributes(struct device *dev) -{ - device_remove_file(dev, &dev_attr_stats); - device_remove_file(dev, &dev_attr_loglevel); -} - -static int -ctc_add_files(struct device *dev) -{ - pr_debug("%s() called\n", __FUNCTION__); - - return sysfs_create_group(&dev->kobj, &ctc_attr_group); -} - -static void -ctc_remove_files(struct device *dev) -{ - pr_debug("%s() called\n", __FUNCTION__); - - 
sysfs_remove_group(&dev->kobj, &ctc_attr_group); -} - -/** - * Add ctc specific attributes. - * Add ctc private data. - * - * @param cgdev pointer to ccwgroup_device just added - * - * @returns 0 on success, !0 on failure. - */ -static int -ctc_probe_device(struct ccwgroup_device *cgdev) -{ - struct ctc_priv *priv; - int rc; - char buffer[16]; - - pr_debug("%s() called\n", __FUNCTION__); - DBF_TEXT(setup, 3, __FUNCTION__); - - if (!get_device(&cgdev->dev)) - return -ENODEV; - - priv = kzalloc(sizeof(struct ctc_priv), GFP_KERNEL); - if (!priv) { - ctc_pr_err("%s: Out of memory\n", __func__); - put_device(&cgdev->dev); - return -ENOMEM; - } - - rc = ctc_add_files(&cgdev->dev); - if (rc) { - kfree(priv); - put_device(&cgdev->dev); - return rc; - } - priv->buffer_size = CTC_BUFSIZE_DEFAULT; - cgdev->cdev[0]->handler = ctc_irq_handler; - cgdev->cdev[1]->handler = ctc_irq_handler; - cgdev->dev.driver_data = priv; - - sprintf(buffer, "%p", priv); - DBF_TEXT(data, 3, buffer); - - sprintf(buffer, "%u", (unsigned int)sizeof(struct ctc_priv)); - DBF_TEXT(data, 3, buffer); - - sprintf(buffer, "%p", &channels); - DBF_TEXT(data, 3, buffer); - - sprintf(buffer, "%u", (unsigned int)sizeof(struct channel)); - DBF_TEXT(data, 3, buffer); - - return 0; -} - -/** - * Device setup function called by alloc_netdev(). - * - * @param dev Device to be setup. - */ -void ctc_init_netdevice(struct net_device * dev) -{ - DBF_TEXT(setup, 3, __FUNCTION__); - - if (dev->mtu == 0) - dev->mtu = CTC_BUFSIZE_DEFAULT - LL_HEADER_LENGTH - 2; - dev->hard_start_xmit = ctc_tx; - dev->open = ctc_open; - dev->stop = ctc_close; - dev->get_stats = ctc_stats; - dev->change_mtu = ctc_change_mtu; - dev->hard_header_len = LL_HEADER_LENGTH + 2; - dev->addr_len = 0; - dev->type = ARPHRD_SLIP; - dev->tx_queue_len = 100; - dev->flags = IFF_POINTOPOINT | IFF_NOARP; -} - - -/** - * - * Setup an interface. - * - * @param cgdev Device to be setup. - * - * @returns 0 on success, !0 on failure. 
- */ -static int -ctc_new_device(struct ccwgroup_device *cgdev) -{ - char read_id[CTC_ID_SIZE]; - char write_id[CTC_ID_SIZE]; - int direction; - enum channel_types type; - struct ctc_priv *privptr; - struct net_device *dev; - int ret; - char buffer[16]; - - pr_debug("%s() called\n", __FUNCTION__); - DBF_TEXT(setup, 3, __FUNCTION__); - - privptr = cgdev->dev.driver_data; - if (!privptr) - return -ENODEV; - - sprintf(buffer, "%d", privptr->buffer_size); - DBF_TEXT(setup, 3, buffer); - - type = get_channel_type(&cgdev->cdev[0]->id); - - snprintf(read_id, CTC_ID_SIZE, "ch-%s", cgdev->cdev[0]->dev.bus_id); - snprintf(write_id, CTC_ID_SIZE, "ch-%s", cgdev->cdev[1]->dev.bus_id); - - if (add_channel(cgdev->cdev[0], type)) - return -ENOMEM; - if (add_channel(cgdev->cdev[1], type)) - return -ENOMEM; - - ret = ccw_device_set_online(cgdev->cdev[0]); - if (ret != 0) { - printk(KERN_WARNING - "ccw_device_set_online (cdev[0]) failed with ret = %d\n", ret); - } - - ret = ccw_device_set_online(cgdev->cdev[1]); - if (ret != 0) { - printk(KERN_WARNING - "ccw_device_set_online (cdev[1]) failed with ret = %d\n", ret); - } - - dev = alloc_netdev(0, "ctc%d", ctc_init_netdevice); - if (!dev) { - ctc_pr_warn("ctc_init_netdevice failed\n"); - goto out; - } - dev->priv = privptr; - - privptr->fsm = init_fsm("ctcdev", dev_state_names, - dev_event_names, CTC_NR_DEV_STATES, CTC_NR_DEV_EVENTS, - dev_fsm, DEV_FSM_LEN, GFP_KERNEL); - if (privptr->fsm == NULL) { - free_netdev(dev); - goto out; - } - fsm_newstate(privptr->fsm, DEV_STATE_STOPPED); - fsm_settimer(privptr->fsm, &privptr->restart_timer); - - for (direction = READ; direction <= WRITE; direction++) { - privptr->channel[direction] = - channel_get(type, direction == READ ? read_id : write_id, - direction); - if (privptr->channel[direction] == NULL) { - if (direction == WRITE) - channel_free(privptr->channel[READ]); - - ctc_free_netdevice(dev, 1); - goto out; - } - privptr->channel[direction]->netdev = dev; - privptr->channel[direction]->protocol = privptr->protocol; - privptr->channel[direction]->max_bufsize = privptr->buffer_size; - } - /* sysfs magic */ - SET_NETDEV_DEV(dev, &cgdev->dev); - - if (ctc_netdev_register(dev) != 0) { - ctc_free_netdevice(dev, 1); - goto out; - } - - if (ctc_add_attributes(&cgdev->dev)) { - ctc_netdev_unregister(dev); - dev->priv = NULL; - ctc_free_netdevice(dev, 1); - goto out; - } - - strlcpy(privptr->fsm->name, dev->name, sizeof (privptr->fsm->name)); - - print_banner(); - - ctc_pr_info("%s: read: %s, write: %s, proto: %d\n", - dev->name, privptr->channel[READ]->id, - privptr->channel[WRITE]->id, privptr->protocol); - - return 0; -out: - ccw_device_set_offline(cgdev->cdev[1]); - ccw_device_set_offline(cgdev->cdev[0]); - - return -ENODEV; -} - -/** - * Shutdown an interface. - * - * @param cgdev Device to be shut down. - * - * @returns 0 on success, !0 on failure. 
- */ -static int -ctc_shutdown_device(struct ccwgroup_device *cgdev) -{ - struct ctc_priv *priv; - struct net_device *ndev; - - DBF_TEXT(setup, 3, __FUNCTION__); - pr_debug("%s() called\n", __FUNCTION__); - - - priv = cgdev->dev.driver_data; - ndev = NULL; - if (!priv) - return -ENODEV; - - if (priv->channel[READ]) { - ndev = priv->channel[READ]->netdev; - - /* Close the device */ - ctc_close(ndev); - ndev->flags &=~IFF_RUNNING; - - ctc_remove_attributes(&cgdev->dev); - - channel_free(priv->channel[READ]); - } - if (priv->channel[WRITE]) - channel_free(priv->channel[WRITE]); - - if (ndev) { - ctc_netdev_unregister(ndev); - ndev->priv = NULL; - ctc_free_netdevice(ndev, 1); - } - - if (priv->fsm) - kfree_fsm(priv->fsm); - - ccw_device_set_offline(cgdev->cdev[1]); - ccw_device_set_offline(cgdev->cdev[0]); - - if (priv->channel[READ]) - channel_remove(priv->channel[READ]); - if (priv->channel[WRITE]) - channel_remove(priv->channel[WRITE]); - priv->channel[READ] = priv->channel[WRITE] = NULL; - - return 0; - -} - -static void -ctc_remove_device(struct ccwgroup_device *cgdev) -{ - struct ctc_priv *priv; - - pr_debug("%s() called\n", __FUNCTION__); - DBF_TEXT(setup, 3, __FUNCTION__); - - priv = cgdev->dev.driver_data; - if (!priv) - return; - if (cgdev->state == CCWGROUP_ONLINE) - ctc_shutdown_device(cgdev); - ctc_remove_files(&cgdev->dev); - cgdev->dev.driver_data = NULL; - kfree(priv); - put_device(&cgdev->dev); -} - -static struct ccwgroup_driver ctc_group_driver = { - .owner = THIS_MODULE, - .name = "ctc", - .max_slaves = 2, - .driver_id = 0xC3E3C3, - .probe = ctc_probe_device, - .remove = ctc_remove_device, - .set_online = ctc_new_device, - .set_offline = ctc_shutdown_device, -}; - -/** - * Module related routines - *****************************************************************************/ - -/** - * Prepare to be unloaded. Free IRQ's and release all resources. - * This is called just before this module is unloaded. It is - * not called, if the usage count is !0, so we don't need to check - * for that. - */ -static void __exit -ctc_exit(void) -{ - DBF_TEXT(setup, 3, __FUNCTION__); - unregister_cu3088_discipline(&ctc_group_driver); - ctc_unregister_dbf_views(); - ctc_pr_info("CTC driver unloaded\n"); -} - -/** - * Initialize module. - * This is called just after the module is loaded. - * - * @return 0 on success, !0 on error. 
- */ -static int __init -ctc_init(void) -{ - int ret = 0; - - loglevel = CTC_LOGLEVEL_DEFAULT; - - DBF_TEXT(setup, 3, __FUNCTION__); - - print_banner(); - - ret = ctc_register_dbf_views(); - if (ret){ - ctc_pr_crit("ctc_init failed with ctc_register_dbf_views rc = %d\n", ret); - return ret; - } - ret = register_cu3088_discipline(&ctc_group_driver); - if (ret) { - ctc_unregister_dbf_views(); - } - return ret; -} - -module_init(ctc_init); -module_exit(ctc_exit); - -/* --- This is the END my friend --- */ diff --git a/drivers/s390/net/ctcmain.h b/drivers/s390/net/ctcmain.h deleted file mode 100644 index 7f305d119f3d..000000000000 --- a/drivers/s390/net/ctcmain.h +++ /dev/null @@ -1,270 +0,0 @@ -/* - * CTC / ESCON network driver - * - * Copyright (C) 2001 IBM Deutschland Entwicklung GmbH, IBM Corporation - * Author(s): Fritz Elfert (elfert@de.ibm.com, felfert@millenux.com) - Peter Tiedemann (ptiedem@de.ibm.com) - * - * - * Documentation used: - * - Principles of Operation (IBM doc#: SA22-7201-06) - * - Common IO/-Device Commands and Self Description (IBM doc#: SA22-7204-02) - * - Common IO/-Device Commands and Self Description (IBM doc#: SN22-5535) - * - ESCON Channel-to-Channel Adapter (IBM doc#: SA22-7203-00) - * - ESCON I/O Interface (IBM doc#: SA22-7202-029 - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License as published by - * the Free Software Foundation; either version 2, or (at your option) - * any later version. - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. - * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. - * - */ - -#ifndef _CTCMAIN_H_ -#define _CTCMAIN_H_ - -#include -#include - -#include -#include - -#include "fsm.h" -#include "cu3088.h" - - -/** - * CCW commands, used in this driver. - */ -#define CCW_CMD_WRITE 0x01 -#define CCW_CMD_READ 0x02 -#define CCW_CMD_SET_EXTENDED 0xc3 -#define CCW_CMD_PREPARE 0xe3 - -#define CTC_PROTO_S390 0 -#define CTC_PROTO_LINUX 1 -#define CTC_PROTO_OS390 3 - -#define CTC_BUFSIZE_LIMIT 65535 -#define CTC_BUFSIZE_DEFAULT 32768 - -#define CTC_TIMEOUT_5SEC 5000 - -#define CTC_INITIAL_BLOCKLEN 2 - -#define READ 0 -#define WRITE 1 - -#define CTC_ID_SIZE BUS_ID_SIZE+3 - - -struct ctc_profile { - unsigned long maxmulti; - unsigned long maxcqueue; - unsigned long doios_single; - unsigned long doios_multi; - unsigned long txlen; - unsigned long tx_time; - struct timespec send_stamp; -}; - -/** - * Definition of one channel - */ -struct channel { - - /** - * Pointer to next channel in list. - */ - struct channel *next; - char id[CTC_ID_SIZE]; - struct ccw_device *cdev; - - /** - * Type of this channel. - * CTC/A or Escon for valid channels. - */ - enum channel_types type; - - /** - * Misc. flags. See CHANNEL_FLAGS_... below - */ - __u32 flags; - - /** - * The protocol of this channel - */ - __u16 protocol; - - /** - * I/O and irq related stuff - */ - struct ccw1 *ccw; - struct irb *irb; - - /** - * RX/TX buffer size - */ - int max_bufsize; - - /** - * Transmit/Receive buffer. - */ - struct sk_buff *trans_skb; - - /** - * Universal I/O queue. 
- */ - struct sk_buff_head io_queue; - - /** - * TX queue for collecting skb's during busy. - */ - struct sk_buff_head collect_queue; - - /** - * Amount of data in collect_queue. - */ - int collect_len; - - /** - * spinlock for collect_queue and collect_len - */ - spinlock_t collect_lock; - - /** - * Timer for detecting unresposive - * I/O operations. - */ - fsm_timer timer; - - /** - * Retry counter for misc. operations. - */ - int retry; - - /** - * The finite state machine of this channel - */ - fsm_instance *fsm; - - /** - * The corresponding net_device this channel - * belongs to. - */ - struct net_device *netdev; - - struct ctc_profile prof; - - unsigned char *trans_skb_data; - - __u16 logflags; -}; - -#define CHANNEL_FLAGS_READ 0 -#define CHANNEL_FLAGS_WRITE 1 -#define CHANNEL_FLAGS_INUSE 2 -#define CHANNEL_FLAGS_BUFSIZE_CHANGED 4 -#define CHANNEL_FLAGS_FAILED 8 -#define CHANNEL_FLAGS_WAITIRQ 16 -#define CHANNEL_FLAGS_RWMASK 1 -#define CHANNEL_DIRECTION(f) (f & CHANNEL_FLAGS_RWMASK) - -#define LOG_FLAG_ILLEGALPKT 1 -#define LOG_FLAG_ILLEGALSIZE 2 -#define LOG_FLAG_OVERRUN 4 -#define LOG_FLAG_NOMEM 8 - -#define CTC_LOGLEVEL_INFO 1 -#define CTC_LOGLEVEL_NOTICE 2 -#define CTC_LOGLEVEL_WARN 4 -#define CTC_LOGLEVEL_EMERG 8 -#define CTC_LOGLEVEL_ERR 16 -#define CTC_LOGLEVEL_DEBUG 32 -#define CTC_LOGLEVEL_CRIT 64 - -#define CTC_LOGLEVEL_DEFAULT \ -(CTC_LOGLEVEL_INFO | CTC_LOGLEVEL_NOTICE | CTC_LOGLEVEL_WARN | CTC_LOGLEVEL_CRIT) - -#define CTC_LOGLEVEL_MAX ((CTC_LOGLEVEL_CRIT<<1)-1) - -#define ctc_pr_debug(fmt, arg...) \ -do { if (loglevel & CTC_LOGLEVEL_DEBUG) printk(KERN_DEBUG fmt,##arg); } while (0) - -#define ctc_pr_info(fmt, arg...) \ -do { if (loglevel & CTC_LOGLEVEL_INFO) printk(KERN_INFO fmt,##arg); } while (0) - -#define ctc_pr_notice(fmt, arg...) \ -do { if (loglevel & CTC_LOGLEVEL_NOTICE) printk(KERN_NOTICE fmt,##arg); } while (0) - -#define ctc_pr_warn(fmt, arg...) \ -do { if (loglevel & CTC_LOGLEVEL_WARN) printk(KERN_WARNING fmt,##arg); } while (0) - -#define ctc_pr_emerg(fmt, arg...) \ -do { if (loglevel & CTC_LOGLEVEL_EMERG) printk(KERN_EMERG fmt,##arg); } while (0) - -#define ctc_pr_err(fmt, arg...) \ -do { if (loglevel & CTC_LOGLEVEL_ERR) printk(KERN_ERR fmt,##arg); } while (0) - -#define ctc_pr_crit(fmt, arg...) \ -do { if (loglevel & CTC_LOGLEVEL_CRIT) printk(KERN_CRIT fmt,##arg); } while (0) - -struct ctc_priv { - struct net_device_stats stats; - unsigned long tbusy; - /** - * The finite state machine of this interface. - */ - fsm_instance *fsm; - /** - * The protocol of this device - */ - __u16 protocol; - /** - * Timer for restarting after I/O Errors - */ - fsm_timer restart_timer; - - int buffer_size; - - struct channel *channel[2]; -}; - -/** - * Definition of our link level header. - */ -struct ll_header { - __u16 length; - __u16 type; - __u16 unused; -}; -#define LL_HEADER_LENGTH (sizeof(struct ll_header)) - -/** - * Compatibility macros for busy handling - * of network devices. 
- */ -static __inline__ void -ctc_clear_busy(struct net_device * dev) -{ - clear_bit(0, &(((struct ctc_priv *) dev->priv)->tbusy)); - netif_wake_queue(dev); -} - -static __inline__ int -ctc_test_and_set_busy(struct net_device * dev) -{ - netif_stop_queue(dev); - return test_and_set_bit(0, &((struct ctc_priv *) dev->priv)->tbusy); -} - -#endif -- cgit v1.2.3-59-g8ed1b From 4a71df50047f0db65ea09b1be155852e81a45eba Mon Sep 17 00:00:00 2001 From: Frank Blaschka Date: Fri, 15 Feb 2008 09:19:42 +0100 Subject: qeth: new qeth device driver List of major changes and improvements: no manipulation of the global ARP constructor clean code split into core, layer 2 and layer 3 functionality better exploitation of the ethtool interface better representation of the various hardware capabilities fix packet socket support (tcpdump), no fake_ll required osasnmpd notification via udev events coding style and beautification Signed-off-by: Frank Blaschka Signed-off-by: Jeff Garzik --- arch/s390/defconfig | 8 +- drivers/s390/net/Kconfig | 31 +- drivers/s390/net/Makefile | 7 +- drivers/s390/net/qeth_core.h | 916 ++++++++ drivers/s390/net/qeth_core_main.c | 4540 +++++++++++++++++++++++++++++++++++++ drivers/s390/net/qeth_core_mpc.c | 266 +++ drivers/s390/net/qeth_core_mpc.h | 566 +++++ drivers/s390/net/qeth_core_offl.c | 701 ++++++ drivers/s390/net/qeth_core_offl.h | 76 + drivers/s390/net/qeth_core_sys.c | 651 ++++++ drivers/s390/net/qeth_l2_main.c | 1242 ++++++++++ drivers/s390/net/qeth_l3.h | 76 + drivers/s390/net/qeth_l3_main.c | 3388 +++++++++++++++++++++++++++ drivers/s390/net/qeth_l3_sys.c | 1051 +++++++++ 14 files changed, 13498 insertions(+), 21 deletions(-) create mode 100644 drivers/s390/net/qeth_core.h create mode 100644 drivers/s390/net/qeth_core_main.c create mode 100644 drivers/s390/net/qeth_core_mpc.c create mode 100644 drivers/s390/net/qeth_core_mpc.h create mode 100644 drivers/s390/net/qeth_core_offl.c create mode 100644 drivers/s390/net/qeth_core_offl.h create mode 100644 drivers/s390/net/qeth_core_sys.c create mode 100644 drivers/s390/net/qeth_l2_main.c create mode 100644 drivers/s390/net/qeth_l3.h create mode 100644 drivers/s390/net/qeth_l3_main.c create mode 100644 drivers/s390/net/qeth_l3_sys.c (limited to 'drivers/s390') diff --git a/arch/s390/defconfig b/arch/s390/defconfig index 62f6b5a606dd..cb93bf20bd75 100644 --- a/arch/s390/defconfig +++ b/arch/s390/defconfig @@ -537,11 +537,9 @@ CONFIG_CTC=m # CONFIG_SMSGIUCV is not set # CONFIG_CLAW is not set CONFIG_QETH=y - -# -# Gigabit Ethernet default settings -# -# CONFIG_QETH_IPV6 is not set +CONFIG_QETH_L2=y +CONFIG_QETH_L3=y +CONFIG_QETH_IPV6=y CONFIG_CCWGROUP=y # CONFIG_PPP is not set # CONFIG_SLIP is not set diff --git a/drivers/s390/net/Kconfig b/drivers/s390/net/Kconfig index 773f5a6d5822..a7745c82b4ae 100644 --- a/drivers/s390/net/Kconfig +++ b/drivers/s390/net/Kconfig @@ -67,23 +67,26 @@ config QETH To compile this driver as a module, choose M. The module name is qeth.ko. +config QETH_L2 + tristate "qeth layer 2 device support" + depends on QETH + help + Select this option to be able to run qeth devices in layer 2 mode. + To compile as a module, choose M. The module name is qeth_l2.ko. + If unsure, choose y. -comment "Gigabit Ethernet default settings" - depends on QETH +config QETH_L3 + tristate "qeth layer 3 device support" + depends on QETH + help + Select this option to be able to run qeth devices in layer 3 mode. + To compile as a module choose M. The module name is qeth_l3.ko. + If unsure, choose Y. 
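The commit message above describes splitting qeth into a common core plus separately loadable layer 2 and layer 3 "disciplines", which is what the new QETH_L2 and QETH_L3 options build. The mechanism behind that split is an ordinary function table that the core dispatches through. The sketch below is a minimal, self-contained illustration of that pattern only, written as plain user-space C; every name in it is invented for the example and is not part of the driver's API.

/*
 * Minimal sketch of a core/discipline split: the core holds one ops
 * pointer and dispatches through it, without knowing which flavour
 * (layer 2 or layer 3) was selected.  Illustration only; all names
 * here are made up for the example.
 */
#include <stdio.h>

struct discipline_ops {
	const char *name;
	int (*start)(void);            /* bring the discipline online       */
	int (*xmit)(const char *data); /* discipline-specific transmit path */
};

static int l2_start(void) { puts("layer2: started"); return 0; }
static int l2_xmit(const char *data)
{
	printf("layer2: sending frame unchanged: %s\n", data);
	return 0;
}

static int l3_start(void) { puts("layer3: started"); return 0; }
static int l3_xmit(const char *data)
{
	printf("layer3: sending with IP-style header: %s\n", data);
	return 0;
}

static const struct discipline_ops l2_ops = { "layer2", l2_start, l2_xmit };
static const struct discipline_ops l3_ops = { "layer3", l3_start, l3_xmit };

/* "Core" side: pick a discipline once, then only ever use the ops table. */
static const struct discipline_ops *core_load_discipline(int want_layer2)
{
	return want_layer2 ? &l2_ops : &l3_ops;
}

int main(void)
{
	const struct discipline_ops *disc = core_load_discipline(1);

	disc->start();
	return disc->xmit("hello");
}

The driver's own form of this appears later in this patch as struct qeth_discipline in qeth_core.h (input_handler, output_handler, recover, ccwgdriver) together with qeth_core_load_discipline(); the QETH_L2 and QETH_L3 options above build the qeth_l2.ko and qeth_l3.ko modules that are expected to supply those callbacks.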
config QETH_IPV6 - bool "IPv6 support for gigabit ethernet" - depends on (QETH = IPV6) || (QETH && IPV6 = 'y') - help - If CONFIG_QETH is switched on, this option will include IPv6 - support in the qeth device driver. - -config QETH_VLAN - bool "VLAN support for gigabit ethernet" - depends on (QETH = VLAN_8021Q) || (QETH && VLAN_8021Q = 'y') - help - If CONFIG_QETH is switched on, this option will include IEEE - 802.1q VLAN support in the qeth device driver. + bool + depends on (QETH_L3 = IPV6) || (QETH_L3 && IPV6 = 'y') + default y config CCWGROUP tristate diff --git a/drivers/s390/net/Makefile b/drivers/s390/net/Makefile index f6d189a8a451..6382c04d2bdf 100644 --- a/drivers/s390/net/Makefile +++ b/drivers/s390/net/Makefile @@ -8,6 +8,9 @@ obj-$(CONFIG_NETIUCV) += netiucv.o fsm.o obj-$(CONFIG_SMSGIUCV) += smsgiucv.o obj-$(CONFIG_LCS) += lcs.o cu3088.o obj-$(CONFIG_CLAW) += claw.o cu3088.o -qeth-y := qeth_main.o qeth_mpc.o qeth_sys.o qeth_eddp.o -qeth-$(CONFIG_PROC_FS) += qeth_proc.o +qeth-y += qeth_core_sys.o qeth_core_main.o qeth_core_mpc.o qeth_core_offl.o obj-$(CONFIG_QETH) += qeth.o +qeth_l2-y += qeth_l2_main.o +obj-$(CONFIG_QETH_L2) += qeth_l2.o +qeth_l3-y += qeth_l3_main.o qeth_l3_sys.o +obj-$(CONFIG_QETH_L3) += qeth_l3.o diff --git a/drivers/s390/net/qeth_core.h b/drivers/s390/net/qeth_core.h new file mode 100644 index 000000000000..9485e363ca11 --- /dev/null +++ b/drivers/s390/net/qeth_core.h @@ -0,0 +1,916 @@ +/* + * drivers/s390/net/qeth_core.h + * + * Copyright IBM Corp. 2007 + * Author(s): Utz Bacher , + * Frank Pavlic , + * Thomas Spatzier , + * Frank Blaschka + */ + +#ifndef __QETH_CORE_H__ +#define __QETH_CORE_H__ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include + +#include +#include +#include +#include + +#include "qeth_core_mpc.h" + +/** + * Debug Facility stuff + */ +#define QETH_DBF_SETUP_NAME "qeth_setup" +#define QETH_DBF_SETUP_LEN 8 +#define QETH_DBF_SETUP_PAGES 8 +#define QETH_DBF_SETUP_NR_AREAS 1 +#define QETH_DBF_SETUP_LEVEL 5 + +#define QETH_DBF_MISC_NAME "qeth_misc" +#define QETH_DBF_MISC_LEN 128 +#define QETH_DBF_MISC_PAGES 2 +#define QETH_DBF_MISC_NR_AREAS 1 +#define QETH_DBF_MISC_LEVEL 2 + +#define QETH_DBF_DATA_NAME "qeth_data" +#define QETH_DBF_DATA_LEN 96 +#define QETH_DBF_DATA_PAGES 8 +#define QETH_DBF_DATA_NR_AREAS 1 +#define QETH_DBF_DATA_LEVEL 2 + +#define QETH_DBF_CONTROL_NAME "qeth_control" +#define QETH_DBF_CONTROL_LEN 256 +#define QETH_DBF_CONTROL_PAGES 8 +#define QETH_DBF_CONTROL_NR_AREAS 1 +#define QETH_DBF_CONTROL_LEVEL 5 + +#define QETH_DBF_TRACE_NAME "qeth_trace" +#define QETH_DBF_TRACE_LEN 8 +#define QETH_DBF_TRACE_PAGES 4 +#define QETH_DBF_TRACE_NR_AREAS 1 +#define QETH_DBF_TRACE_LEVEL 3 + +#define QETH_DBF_SENSE_NAME "qeth_sense" +#define QETH_DBF_SENSE_LEN 64 +#define QETH_DBF_SENSE_PAGES 2 +#define QETH_DBF_SENSE_NR_AREAS 1 +#define QETH_DBF_SENSE_LEVEL 2 + +#define QETH_DBF_QERR_NAME "qeth_qerr" +#define QETH_DBF_QERR_LEN 8 +#define QETH_DBF_QERR_PAGES 2 +#define QETH_DBF_QERR_NR_AREAS 1 +#define QETH_DBF_QERR_LEVEL 2 + +#define QETH_DBF_TEXT(name, level, text) \ + do { \ + debug_text_event(qeth_dbf_##name, level, text); \ + } while (0) + +#define QETH_DBF_HEX(name, level, addr, len) \ + do { \ + debug_event(qeth_dbf_##name, level, (void *)(addr), len); \ + } while (0) + +/* Allow to sort out low debug levels early to avoid wasted sprints */ +static inline int qeth_dbf_passes(debug_info_t *dbf_grp, int level) +{ + return (level <= 
dbf_grp->level); +} + +/** + * some more debug stuff + */ +#define PRINTK_HEADER "qeth: " + +#define SENSE_COMMAND_REJECT_BYTE 0 +#define SENSE_COMMAND_REJECT_FLAG 0x80 +#define SENSE_RESETTING_EVENT_BYTE 1 +#define SENSE_RESETTING_EVENT_FLAG 0x80 + +/* + * Common IO related definitions + */ +#define CARD_RDEV(card) card->read.ccwdev +#define CARD_WDEV(card) card->write.ccwdev +#define CARD_DDEV(card) card->data.ccwdev +#define CARD_BUS_ID(card) card->gdev->dev.bus_id +#define CARD_RDEV_ID(card) card->read.ccwdev->dev.bus_id +#define CARD_WDEV_ID(card) card->write.ccwdev->dev.bus_id +#define CARD_DDEV_ID(card) card->data.ccwdev->dev.bus_id +#define CHANNEL_ID(channel) channel->ccwdev->dev.bus_id + +/** + * card stuff + */ +struct qeth_perf_stats { + unsigned int bufs_rec; + unsigned int bufs_sent; + + unsigned int skbs_sent_pack; + unsigned int bufs_sent_pack; + + unsigned int sc_dp_p; + unsigned int sc_p_dp; + /* qdio_input_handler: number of times called, time spent in */ + __u64 inbound_start_time; + unsigned int inbound_cnt; + unsigned int inbound_time; + /* qeth_send_packet: number of times called, time spent in */ + __u64 outbound_start_time; + unsigned int outbound_cnt; + unsigned int outbound_time; + /* qdio_output_handler: number of times called, time spent in */ + __u64 outbound_handler_start_time; + unsigned int outbound_handler_cnt; + unsigned int outbound_handler_time; + /* number of calls to and time spent in do_QDIO for inbound queue */ + __u64 inbound_do_qdio_start_time; + unsigned int inbound_do_qdio_cnt; + unsigned int inbound_do_qdio_time; + /* number of calls to and time spent in do_QDIO for outbound queues */ + __u64 outbound_do_qdio_start_time; + unsigned int outbound_do_qdio_cnt; + unsigned int outbound_do_qdio_time; + /* eddp data */ + unsigned int large_send_bytes; + unsigned int large_send_cnt; + unsigned int sg_skbs_sent; + unsigned int sg_frags_sent; + /* initial values when measuring starts */ + unsigned long initial_rx_packets; + unsigned long initial_tx_packets; + /* inbound scatter gather data */ + unsigned int sg_skbs_rx; + unsigned int sg_frags_rx; + unsigned int sg_alloc_page_rx; +}; + +/* Routing stuff */ +struct qeth_routing_info { + enum qeth_routing_types type; +}; + +/* IPA stuff */ +struct qeth_ipa_info { + __u32 supported_funcs; + __u32 enabled_funcs; +}; + +static inline int qeth_is_ipa_supported(struct qeth_ipa_info *ipa, + enum qeth_ipa_funcs func) +{ + return (ipa->supported_funcs & func); +} + +static inline int qeth_is_ipa_enabled(struct qeth_ipa_info *ipa, + enum qeth_ipa_funcs func) +{ + return (ipa->supported_funcs & ipa->enabled_funcs & func); +} + +#define qeth_adp_supported(c, f) \ + qeth_is_ipa_supported(&c->options.adp, f) +#define qeth_adp_enabled(c, f) \ + qeth_is_ipa_enabled(&c->options.adp, f) +#define qeth_is_supported(c, f) \ + qeth_is_ipa_supported(&c->options.ipa4, f) +#define qeth_is_enabled(c, f) \ + qeth_is_ipa_enabled(&c->options.ipa4, f) +#define qeth_is_supported6(c, f) \ + qeth_is_ipa_supported(&c->options.ipa6, f) +#define qeth_is_enabled6(c, f) \ + qeth_is_ipa_enabled(&c->options.ipa6, f) +#define qeth_is_ipafunc_supported(c, prot, f) \ + ((prot == QETH_PROT_IPV6) ? \ + qeth_is_supported6(c, f) : qeth_is_supported(c, f)) +#define qeth_is_ipafunc_enabled(c, prot, f) \ + ((prot == QETH_PROT_IPV6) ? 
\ + qeth_is_enabled6(c, f) : qeth_is_enabled(c, f)) + +#define QETH_IDX_FUNC_LEVEL_OSAE_ENA_IPAT 0x0101 +#define QETH_IDX_FUNC_LEVEL_OSAE_DIS_IPAT 0x0101 +#define QETH_IDX_FUNC_LEVEL_IQD_ENA_IPAT 0x4108 +#define QETH_IDX_FUNC_LEVEL_IQD_DIS_IPAT 0x5108 + +#define QETH_MODELLIST_ARRAY \ + {{0x1731, 0x01, 0x1732, 0x01, QETH_CARD_TYPE_OSAE, 1, \ + QETH_IDX_FUNC_LEVEL_OSAE_ENA_IPAT, \ + QETH_IDX_FUNC_LEVEL_OSAE_DIS_IPAT, \ + QETH_MAX_QUEUES, 0}, \ + {0x1731, 0x05, 0x1732, 0x05, QETH_CARD_TYPE_IQD, 0, \ + QETH_IDX_FUNC_LEVEL_IQD_ENA_IPAT, \ + QETH_IDX_FUNC_LEVEL_IQD_DIS_IPAT, \ + QETH_MAX_QUEUES, 0x103}, \ + {0x1731, 0x06, 0x1732, 0x06, QETH_CARD_TYPE_OSN, 0, \ + QETH_IDX_FUNC_LEVEL_OSAE_ENA_IPAT, \ + QETH_IDX_FUNC_LEVEL_OSAE_DIS_IPAT, \ + QETH_MAX_QUEUES, 0}, \ + {0, 0, 0, 0, 0, 0, 0, 0, 0} } + +#define QETH_REAL_CARD 1 +#define QETH_VLAN_CARD 2 +#define QETH_BUFSIZE 4096 + +/** + * some more defs + */ +#define QETH_TX_TIMEOUT 100 * HZ +#define QETH_RCD_TIMEOUT 60 * HZ +#define QETH_HEADER_SIZE 32 +#define QETH_MAX_PORTNO 15 + +/*IPv6 address autoconfiguration stuff*/ +#define UNIQUE_ID_IF_CREATE_ADDR_FAILED 0xfffe +#define UNIQUE_ID_NOT_BY_CARD 0x10000 + +/*****************************************************************************/ +/* QDIO queue and buffer handling */ +/*****************************************************************************/ +#define QETH_MAX_QUEUES 4 +#define QETH_IN_BUF_SIZE_DEFAULT 65536 +#define QETH_IN_BUF_COUNT_DEFAULT 16 +#define QETH_IN_BUF_COUNT_MIN 8 +#define QETH_IN_BUF_COUNT_MAX 128 +#define QETH_MAX_BUFFER_ELEMENTS(card) ((card)->qdio.in_buf_size >> 12) +#define QETH_IN_BUF_REQUEUE_THRESHOLD(card) \ + ((card)->qdio.in_buf_pool.buf_count / 2) + +/* buffers we have to be behind before we get a PCI */ +#define QETH_PCI_THRESHOLD_A(card) ((card)->qdio.in_buf_pool.buf_count+1) +/*enqueued free buffers left before we get a PCI*/ +#define QETH_PCI_THRESHOLD_B(card) 0 +/*not used unless the microcode gets patched*/ +#define QETH_PCI_TIMER_VALUE(card) 3 + +#define QETH_MIN_INPUT_THRESHOLD 1 +#define QETH_MAX_INPUT_THRESHOLD 500 +#define QETH_MIN_OUTPUT_THRESHOLD 1 +#define QETH_MAX_OUTPUT_THRESHOLD 300 + +/* priority queing */ +#define QETH_PRIOQ_DEFAULT QETH_NO_PRIO_QUEUEING +#define QETH_DEFAULT_QUEUE 2 +#define QETH_NO_PRIO_QUEUEING 0 +#define QETH_PRIO_Q_ING_PREC 1 +#define QETH_PRIO_Q_ING_TOS 2 +#define IP_TOS_LOWDELAY 0x10 +#define IP_TOS_HIGHTHROUGHPUT 0x08 +#define IP_TOS_HIGHRELIABILITY 0x04 +#define IP_TOS_NOTIMPORTANT 0x02 + +/* Packing */ +#define QETH_LOW_WATERMARK_PACK 2 +#define QETH_HIGH_WATERMARK_PACK 5 +#define QETH_WATERMARK_PACK_FUZZ 1 + +#define QETH_IP_HEADER_SIZE 40 + +/* large receive scatter gather copy break */ +#define QETH_RX_SG_CB (PAGE_SIZE >> 1) + +struct qeth_hdr_layer3 { + __u8 id; + __u8 flags; + __u16 inbound_checksum; /*TSO:__u16 seqno */ + __u32 token; /*TSO: __u32 reserved */ + __u16 length; + __u8 vlan_prio; + __u8 ext_flags; + __u16 vlan_id; + __u16 frame_offset; + __u8 dest_addr[16]; +} __attribute__ ((packed)); + +struct qeth_hdr_layer2 { + __u8 id; + __u8 flags[3]; + __u8 port_no; + __u8 hdr_length; + __u16 pkt_length; + __u16 seq_no; + __u16 vlan_id; + __u32 reserved; + __u8 reserved2[16]; +} __attribute__ ((packed)); + +struct qeth_hdr_osn { + __u8 id; + __u8 reserved; + __u16 seq_no; + __u16 reserved2; + __u16 control_flags; + __u16 pdu_length; + __u8 reserved3[18]; + __u32 ccid; +} __attribute__ ((packed)); + +struct qeth_hdr { + union { + struct qeth_hdr_layer2 l2; + struct qeth_hdr_layer3 l3; + struct qeth_hdr_osn 
osn; + } hdr; +} __attribute__ ((packed)); + +/*TCP Segmentation Offload header*/ +struct qeth_hdr_ext_tso { + __u16 hdr_tot_len; + __u8 imb_hdr_no; + __u8 reserved; + __u8 hdr_type; + __u8 hdr_version; + __u16 hdr_len; + __u32 payload_len; + __u16 mss; + __u16 dg_hdr_len; + __u8 padding[16]; +} __attribute__ ((packed)); + +struct qeth_hdr_tso { + struct qeth_hdr hdr; /*hdr->hdr.l3.xxx*/ + struct qeth_hdr_ext_tso ext; +} __attribute__ ((packed)); + + +/* flags for qeth_hdr.flags */ +#define QETH_HDR_PASSTHRU 0x10 +#define QETH_HDR_IPV6 0x80 +#define QETH_HDR_CAST_MASK 0x07 +enum qeth_cast_flags { + QETH_CAST_UNICAST = 0x06, + QETH_CAST_MULTICAST = 0x04, + QETH_CAST_BROADCAST = 0x05, + QETH_CAST_ANYCAST = 0x07, + QETH_CAST_NOCAST = 0x00, +}; + +enum qeth_layer2_frame_flags { + QETH_LAYER2_FLAG_MULTICAST = 0x01, + QETH_LAYER2_FLAG_BROADCAST = 0x02, + QETH_LAYER2_FLAG_UNICAST = 0x04, + QETH_LAYER2_FLAG_VLAN = 0x10, +}; + +enum qeth_header_ids { + QETH_HEADER_TYPE_LAYER3 = 0x01, + QETH_HEADER_TYPE_LAYER2 = 0x02, + QETH_HEADER_TYPE_TSO = 0x03, + QETH_HEADER_TYPE_OSN = 0x04, +}; +/* flags for qeth_hdr.ext_flags */ +#define QETH_HDR_EXT_VLAN_FRAME 0x01 +#define QETH_HDR_EXT_TOKEN_ID 0x02 +#define QETH_HDR_EXT_INCLUDE_VLAN_TAG 0x04 +#define QETH_HDR_EXT_SRC_MAC_ADDR 0x08 +#define QETH_HDR_EXT_CSUM_HDR_REQ 0x10 +#define QETH_HDR_EXT_CSUM_TRANSP_REQ 0x20 +#define QETH_HDR_EXT_UDP_TSO 0x40 /*bit off for TCP*/ + +static inline int qeth_is_last_sbale(struct qdio_buffer_element *sbale) +{ + return (sbale->flags & SBAL_FLAGS_LAST_ENTRY); +} + +enum qeth_qdio_buffer_states { + /* + * inbound: read out by driver; owned by hardware in order to be filled + * outbound: owned by driver in order to be filled + */ + QETH_QDIO_BUF_EMPTY, + /* + * inbound: filled by hardware; owned by driver in order to be read out + * outbound: filled by driver; owned by hardware in order to be sent + */ + QETH_QDIO_BUF_PRIMED, +}; + +enum qeth_qdio_info_states { + QETH_QDIO_UNINITIALIZED, + QETH_QDIO_ALLOCATED, + QETH_QDIO_ESTABLISHED, + QETH_QDIO_CLEANING +}; + +struct qeth_buffer_pool_entry { + struct list_head list; + struct list_head init_list; + void *elements[QDIO_MAX_ELEMENTS_PER_BUFFER]; +}; + +struct qeth_qdio_buffer_pool { + struct list_head entry_list; + int buf_count; +}; + +struct qeth_qdio_buffer { + struct qdio_buffer *buffer; + /* the buffer pool entry currently associated to this buffer */ + struct qeth_buffer_pool_entry *pool_entry; +}; + +struct qeth_qdio_q { + struct qdio_buffer qdio_bufs[QDIO_MAX_BUFFERS_PER_Q]; + struct qeth_qdio_buffer bufs[QDIO_MAX_BUFFERS_PER_Q]; + int next_buf_to_init; +} __attribute__ ((aligned(256))); + +/* possible types of qeth large_send support */ +enum qeth_large_send_types { + QETH_LARGE_SEND_NO, + QETH_LARGE_SEND_EDDP, + QETH_LARGE_SEND_TSO, +}; + +struct qeth_qdio_out_buffer { + struct qdio_buffer *buffer; + atomic_t state; + int next_element_to_fill; + struct sk_buff_head skb_list; + struct list_head ctx_list; +}; + +struct qeth_card; + +enum qeth_out_q_states { + QETH_OUT_Q_UNLOCKED, + QETH_OUT_Q_LOCKED, + QETH_OUT_Q_LOCKED_FLUSH, +}; + +struct qeth_qdio_out_q { + struct qdio_buffer qdio_bufs[QDIO_MAX_BUFFERS_PER_Q]; + struct qeth_qdio_out_buffer bufs[QDIO_MAX_BUFFERS_PER_Q]; + int queue_no; + struct qeth_card *card; + atomic_t state; + int do_pack; + /* + * index of buffer to be filled by driver; state EMPTY or PACKING + */ + int next_buf_to_fill; + /* + * number of buffers that are currently filled (PRIMED) + * -> these buffers are hardware-owned + */ + atomic_t 
used_buffers; + /* indicates whether PCI flag must be set (or if one is outstanding) */ + atomic_t set_pci_flags_count; +} __attribute__ ((aligned(256))); + +struct qeth_qdio_info { + atomic_t state; + /* input */ + struct qeth_qdio_q *in_q; + struct qeth_qdio_buffer_pool in_buf_pool; + struct qeth_qdio_buffer_pool init_pool; + int in_buf_size; + + /* output */ + int no_out_queues; + struct qeth_qdio_out_q **out_qs; + + /* priority queueing */ + int do_prio_queueing; + int default_out_queue; +}; + +enum qeth_send_errors { + QETH_SEND_ERROR_NONE, + QETH_SEND_ERROR_LINK_FAILURE, + QETH_SEND_ERROR_RETRY, + QETH_SEND_ERROR_KICK_IT, +}; + +#define QETH_ETH_MAC_V4 0x0100 /* like v4 */ +#define QETH_ETH_MAC_V6 0x3333 /* like v6 */ +/* tr mc mac is longer, but that will be enough to detect mc frames */ +#define QETH_TR_MAC_NC 0xc000 /* non-canonical */ +#define QETH_TR_MAC_C 0x0300 /* canonical */ + +#define DEFAULT_ADD_HHLEN 0 +#define MAX_ADD_HHLEN 1024 + +/** + * buffer stuff for read channel + */ +#define QETH_CMD_BUFFER_NO 8 + +/** + * channel state machine + */ +enum qeth_channel_states { + CH_STATE_UP, + CH_STATE_DOWN, + CH_STATE_ACTIVATING, + CH_STATE_HALTED, + CH_STATE_STOPPED, + CH_STATE_RCD, + CH_STATE_RCD_DONE, +}; +/** + * card state machine + */ +enum qeth_card_states { + CARD_STATE_DOWN, + CARD_STATE_HARDSETUP, + CARD_STATE_SOFTSETUP, + CARD_STATE_UP, + CARD_STATE_RECOVER, +}; + +/** + * Protocol versions + */ +enum qeth_prot_versions { + QETH_PROT_IPV4 = 0x0004, + QETH_PROT_IPV6 = 0x0006, +}; + +enum qeth_ip_types { + QETH_IP_TYPE_NORMAL, + QETH_IP_TYPE_VIPA, + QETH_IP_TYPE_RXIP, + QETH_IP_TYPE_DEL_ALL_MC, +}; + +enum qeth_cmd_buffer_state { + BUF_STATE_FREE, + BUF_STATE_LOCKED, + BUF_STATE_PROCESSED, +}; + +struct qeth_ipato { + int enabled; + int invert4; + int invert6; + struct list_head entries; +}; + +struct qeth_channel; + +struct qeth_cmd_buffer { + enum qeth_cmd_buffer_state state; + struct qeth_channel *channel; + unsigned char *data; + int rc; + void (*callback) (struct qeth_channel *, struct qeth_cmd_buffer *); +}; + +/** + * definition of a qeth channel, used for read and write + */ +struct qeth_channel { + enum qeth_channel_states state; + struct ccw1 ccw; + spinlock_t iob_lock; + wait_queue_head_t wait_q; + struct tasklet_struct irq_tasklet; + struct ccw_device *ccwdev; +/*command buffer for control data*/ + struct qeth_cmd_buffer iob[QETH_CMD_BUFFER_NO]; + atomic_t irq_pending; + int io_buf_no; + int buf_no; +}; + +/** + * OSA card related definitions + */ +struct qeth_token { + __u32 issuer_rm_w; + __u32 issuer_rm_r; + __u32 cm_filter_w; + __u32 cm_filter_r; + __u32 cm_connection_w; + __u32 cm_connection_r; + __u32 ulp_filter_w; + __u32 ulp_filter_r; + __u32 ulp_connection_w; + __u32 ulp_connection_r; +}; + +struct qeth_seqno { + __u32 trans_hdr; + __u32 pdu_hdr; + __u32 pdu_hdr_ack; + __u16 ipa; + __u32 pkt_seqno; +}; + +struct qeth_reply { + struct list_head list; + wait_queue_head_t wait_q; + int (*callback)(struct qeth_card *, struct qeth_reply *, + unsigned long); + u32 seqno; + unsigned long offset; + atomic_t received; + int rc; + void *param; + struct qeth_card *card; + atomic_t refcnt; +}; + + +struct qeth_card_blkt { + int time_total; + int inter_packet; + int inter_packet_jumbo; +}; + +#define QETH_BROADCAST_WITH_ECHO 0x01 +#define QETH_BROADCAST_WITHOUT_ECHO 0x02 +#define QETH_LAYER2_MAC_READ 0x01 +#define QETH_LAYER2_MAC_REGISTERED 0x02 +struct qeth_card_info { + unsigned short unit_addr2; + unsigned short cula; + unsigned short chpid; + __u16 
func_level; + char mcl_level[QETH_MCL_LENGTH + 1]; + int guestlan; + int mac_bits; + int portname_required; + int portno; + char portname[9]; + enum qeth_card_types type; + enum qeth_link_types link_type; + int is_multicast_different; + int initial_mtu; + int max_mtu; + int broadcast_capable; + int unique_id; + struct qeth_card_blkt blkt; + __u32 csum_mask; + enum qeth_ipa_promisc_modes promisc_mode; +}; + +struct qeth_card_options { + struct qeth_routing_info route4; + struct qeth_ipa_info ipa4; + struct qeth_ipa_info adp; /*Adapter parameters*/ + struct qeth_routing_info route6; + struct qeth_ipa_info ipa6; + enum qeth_checksum_types checksum_type; + int broadcast_mode; + int macaddr_mode; + int fake_broadcast; + int add_hhlen; + int fake_ll; + int layer2; + enum qeth_large_send_types large_send; + int performance_stats; + int rx_sg_cb; +}; + +/* + * thread bits for qeth_card thread masks + */ +enum qeth_threads { + QETH_RECOVER_THREAD = 1, +}; + +struct qeth_osn_info { + int (*assist_cb)(struct net_device *dev, void *data); + int (*data_cb)(struct sk_buff *skb); +}; + +enum qeth_discipline_id { + QETH_DISCIPLINE_LAYER3 = 0, + QETH_DISCIPLINE_LAYER2 = 1, +}; + +struct qeth_discipline { + qdio_handler_t *input_handler; + qdio_handler_t *output_handler; + int (*recover)(void *ptr); + struct ccwgroup_driver *ccwgdriver; +}; + +struct qeth_vlan_vid { + struct list_head list; + unsigned short vid; +}; + +struct qeth_mc_mac { + struct list_head list; + __u8 mc_addr[MAX_ADDR_LEN]; + unsigned char mc_addrlen; +}; + +struct qeth_card { + struct list_head list; + enum qeth_card_states state; + int lan_online; + spinlock_t lock; + struct ccwgroup_device *gdev; + struct qeth_channel read; + struct qeth_channel write; + struct qeth_channel data; + + struct net_device *dev; + struct net_device_stats stats; + + struct qeth_card_info info; + struct qeth_token token; + struct qeth_seqno seqno; + struct qeth_card_options options; + + wait_queue_head_t wait_q; + spinlock_t vlanlock; + spinlock_t mclock; + struct vlan_group *vlangrp; + struct list_head vid_list; + struct list_head mc_list; + struct work_struct kernel_thread_starter; + spinlock_t thread_mask_lock; + unsigned long thread_start_mask; + unsigned long thread_allowed_mask; + unsigned long thread_running_mask; + spinlock_t ip_lock; + struct list_head ip_list; + struct list_head *ip_tbd_list; + struct qeth_ipato ipato; + struct list_head cmd_waiter_list; + /* QDIO buffer handling */ + struct qeth_qdio_info qdio; + struct qeth_perf_stats perf_stats; + int use_hard_stop; + struct qeth_osn_info osn_info; + struct qeth_discipline discipline; + atomic_t force_alloc_skb; +}; + +struct qeth_card_list_struct { + struct list_head list; + rwlock_t rwlock; +}; + +/*some helper functions*/ +#define QETH_CARD_IFNAME(card) (((card)->dev)? 
(card)->dev->name : "") + +static inline struct qeth_card *CARD_FROM_CDEV(struct ccw_device *cdev) +{ + struct qeth_card *card = dev_get_drvdata(&((struct ccwgroup_device *) + dev_get_drvdata(&cdev->dev))->dev); + return card; +} + +static inline int qeth_get_micros(void) +{ + return (int) (get_clock() >> 12); +} + +static inline void *qeth_push_skb(struct qeth_card *card, struct sk_buff *skb, + int size) +{ + void *hdr; + + hdr = (void *) skb_push(skb, size); + /* + * sanity check, the Linux memory allocation scheme should + * never present us cases like this one (the qdio header size plus + * the first 40 bytes of the paket cross a 4k boundary) + */ + if ((((unsigned long) hdr) & (~(PAGE_SIZE - 1))) != + (((unsigned long) hdr + size + + QETH_IP_HEADER_SIZE) & (~(PAGE_SIZE - 1)))) { + PRINT_ERR("Misaligned packet on interface %s. Discarded.", + QETH_CARD_IFNAME(card)); + return NULL; + } + return hdr; +} + +static inline int qeth_get_ip_version(struct sk_buff *skb) +{ + switch (skb->protocol) { + case ETH_P_IPV6: + return 6; + case ETH_P_IP: + return 4; + default: + return 0; + } +} + +struct qeth_eddp_context; +extern struct ccwgroup_driver qeth_l2_ccwgroup_driver; +extern struct ccwgroup_driver qeth_l3_ccwgroup_driver; +const char *qeth_get_cardname_short(struct qeth_card *); +int qeth_realloc_buffer_pool(struct qeth_card *, int); +int qeth_core_load_discipline(struct qeth_card *, enum qeth_discipline_id); +void qeth_core_free_discipline(struct qeth_card *); +int qeth_core_create_device_attributes(struct device *); +void qeth_core_remove_device_attributes(struct device *); +int qeth_core_create_osn_attributes(struct device *); +void qeth_core_remove_osn_attributes(struct device *); + +/* exports for qeth discipline device drivers */ +extern struct qeth_card_list_struct qeth_core_card_list; +extern debug_info_t *qeth_dbf_setup; +extern debug_info_t *qeth_dbf_data; +extern debug_info_t *qeth_dbf_misc; +extern debug_info_t *qeth_dbf_control; +extern debug_info_t *qeth_dbf_trace; +extern debug_info_t *qeth_dbf_sense; +extern debug_info_t *qeth_dbf_qerr; + +void qeth_set_allowed_threads(struct qeth_card *, unsigned long , int); +int qeth_threads_running(struct qeth_card *, unsigned long); +int qeth_wait_for_threads(struct qeth_card *, unsigned long); +int qeth_do_run_thread(struct qeth_card *, unsigned long); +void qeth_clear_thread_start_bit(struct qeth_card *, unsigned long); +void qeth_clear_thread_running_bit(struct qeth_card *, unsigned long); +int qeth_core_hardsetup_card(struct qeth_card *); +void qeth_print_status_message(struct qeth_card *); +int qeth_init_qdio_queues(struct qeth_card *); +int qeth_send_startlan(struct qeth_card *); +int qeth_send_stoplan(struct qeth_card *); +int qeth_send_ipa_cmd(struct qeth_card *, struct qeth_cmd_buffer *, + int (*reply_cb) + (struct qeth_card *, struct qeth_reply *, unsigned long), + void *); +struct qeth_cmd_buffer *qeth_get_ipacmd_buffer(struct qeth_card *, + enum qeth_ipa_cmds, enum qeth_prot_versions); +int qeth_query_setadapterparms(struct qeth_card *); +int qeth_check_qdio_errors(struct qdio_buffer *, unsigned int, + unsigned int, const char *); +void qeth_put_buffer_pool_entry(struct qeth_card *, + struct qeth_buffer_pool_entry *); +void qeth_queue_input_buffer(struct qeth_card *, int); +struct sk_buff *qeth_core_get_next_skb(struct qeth_card *, + struct qdio_buffer *, struct qdio_buffer_element **, int *, + struct qeth_hdr **); +void qeth_schedule_recovery(struct qeth_card *); +void qeth_qdio_output_handler(struct ccw_device *, unsigned 
int, + unsigned int, unsigned int, + unsigned int, int, int, + unsigned long); +void qeth_clear_ipacmd_list(struct qeth_card *); +int qeth_qdio_clear_card(struct qeth_card *, int); +void qeth_clear_working_pool_list(struct qeth_card *); +void qeth_clear_cmd_buffers(struct qeth_channel *); +void qeth_clear_qdio_buffers(struct qeth_card *); +void qeth_setadp_promisc_mode(struct qeth_card *); +struct net_device_stats *qeth_get_stats(struct net_device *); +int qeth_change_mtu(struct net_device *, int); +int qeth_setadpparms_change_macaddr(struct qeth_card *); +void qeth_tx_timeout(struct net_device *); +void qeth_prepare_control_data(struct qeth_card *, int, + struct qeth_cmd_buffer *); +void qeth_release_buffer(struct qeth_channel *, struct qeth_cmd_buffer *); +void qeth_prepare_ipa_cmd(struct qeth_card *, struct qeth_cmd_buffer *, char); +struct qeth_cmd_buffer *qeth_wait_for_buffer(struct qeth_channel *); +int qeth_mdio_read(struct net_device *, int, int); +int qeth_snmp_command(struct qeth_card *, char __user *); +int qeth_set_large_send(struct qeth_card *, enum qeth_large_send_types); +struct qeth_cmd_buffer *qeth_get_adapter_cmd(struct qeth_card *, __u32, __u32); +int qeth_default_setadapterparms_cb(struct qeth_card *, struct qeth_reply *, + unsigned long); +int qeth_send_control_data(struct qeth_card *, int, struct qeth_cmd_buffer *, + int (*reply_cb)(struct qeth_card *, struct qeth_reply*, unsigned long), + void *reply_param); +int qeth_get_cast_type(struct qeth_card *, struct sk_buff *); +int qeth_get_priority_queue(struct qeth_card *, struct sk_buff *, int, int); +struct sk_buff *qeth_prepare_skb(struct qeth_card *, struct sk_buff *, + struct qeth_hdr **); +int qeth_get_elements_no(struct qeth_card *, void *, struct sk_buff *, int); +int qeth_do_send_packet_fast(struct qeth_card *, struct qeth_qdio_out_q *, + struct sk_buff *, struct qeth_hdr *, int, + struct qeth_eddp_context *); +int qeth_do_send_packet(struct qeth_card *, struct qeth_qdio_out_q *, + struct sk_buff *, struct qeth_hdr *, + int, struct qeth_eddp_context *); +int qeth_core_get_stats_count(struct net_device *); +void qeth_core_get_ethtool_stats(struct net_device *, + struct ethtool_stats *, u64 *); +void qeth_core_get_strings(struct net_device *, u32, u8 *); +void qeth_core_get_drvinfo(struct net_device *, struct ethtool_drvinfo *); + +/* exports for OSN */ +int qeth_osn_assist(struct net_device *, void *, int); +int qeth_osn_register(unsigned char *read_dev_no, struct net_device **, + int (*assist_cb)(struct net_device *, void *), + int (*data_cb)(struct sk_buff *)); +void qeth_osn_deregister(struct net_device *); + +#endif /* __QETH_CORE_H__ */ diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c new file mode 100644 index 000000000000..95c6fcf58953 --- /dev/null +++ b/drivers/s390/net/qeth_core_main.c @@ -0,0 +1,4540 @@ +/* + * drivers/s390/net/qeth_core_main.c + * + * Copyright IBM Corp. 2007 + * Author(s): Utz Bacher , + * Frank Pavlic , + * Thomas Spatzier , + * Frank Blaschka + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include + +#include "qeth_core.h" +#include "qeth_core_offl.h" + +#define QETH_DBF_TEXT_(name, level, text...) 
\ + do { \ + if (qeth_dbf_passes(qeth_dbf_##name, level)) { \ + char *dbf_txt_buf = \ + get_cpu_var(qeth_core_dbf_txt_buf); \ + sprintf(dbf_txt_buf, text); \ + debug_text_event(qeth_dbf_##name, level, dbf_txt_buf); \ + put_cpu_var(qeth_core_dbf_txt_buf); \ + } \ + } while (0) + +struct qeth_card_list_struct qeth_core_card_list; +EXPORT_SYMBOL_GPL(qeth_core_card_list); +debug_info_t *qeth_dbf_setup; +EXPORT_SYMBOL_GPL(qeth_dbf_setup); +debug_info_t *qeth_dbf_data; +EXPORT_SYMBOL_GPL(qeth_dbf_data); +debug_info_t *qeth_dbf_misc; +EXPORT_SYMBOL_GPL(qeth_dbf_misc); +debug_info_t *qeth_dbf_control; +EXPORT_SYMBOL_GPL(qeth_dbf_control); +debug_info_t *qeth_dbf_trace; +EXPORT_SYMBOL_GPL(qeth_dbf_trace); +debug_info_t *qeth_dbf_sense; +EXPORT_SYMBOL_GPL(qeth_dbf_sense); +debug_info_t *qeth_dbf_qerr; +EXPORT_SYMBOL_GPL(qeth_dbf_qerr); + +static struct device *qeth_core_root_dev; +static unsigned int known_devices[][10] = QETH_MODELLIST_ARRAY; +static struct lock_class_key qdio_out_skb_queue_key; +static DEFINE_PER_CPU(char[256], qeth_core_dbf_txt_buf); + +static void qeth_send_control_data_cb(struct qeth_channel *, + struct qeth_cmd_buffer *); +static int qeth_issue_next_read(struct qeth_card *); +static struct qeth_cmd_buffer *qeth_get_buffer(struct qeth_channel *); +static void qeth_setup_ccw(struct qeth_channel *, unsigned char *, __u32); +static void qeth_free_buffer_pool(struct qeth_card *); +static int qeth_qdio_establish(struct qeth_card *); + + +static inline void __qeth_fill_buffer_frag(struct sk_buff *skb, + struct qdio_buffer *buffer, int is_tso, + int *next_element_to_fill) +{ + struct skb_frag_struct *frag; + int fragno; + unsigned long addr; + int element, cnt, dlen; + + fragno = skb_shinfo(skb)->nr_frags; + element = *next_element_to_fill; + dlen = 0; + + if (is_tso) + buffer->element[element].flags = + SBAL_FLAGS_MIDDLE_FRAG; + else + buffer->element[element].flags = + SBAL_FLAGS_FIRST_FRAG; + dlen = skb->len - skb->data_len; + if (dlen) { + buffer->element[element].addr = skb->data; + buffer->element[element].length = dlen; + element++; + } + for (cnt = 0; cnt < fragno; cnt++) { + frag = &skb_shinfo(skb)->frags[cnt]; + addr = (page_to_pfn(frag->page) << PAGE_SHIFT) + + frag->page_offset; + buffer->element[element].addr = (char *)addr; + buffer->element[element].length = frag->size; + if (cnt < (fragno - 1)) + buffer->element[element].flags = + SBAL_FLAGS_MIDDLE_FRAG; + else + buffer->element[element].flags = + SBAL_FLAGS_LAST_FRAG; + element++; + } + *next_element_to_fill = element; +} + +static inline const char *qeth_get_cardname(struct qeth_card *card) +{ + if (card->info.guestlan) { + switch (card->info.type) { + case QETH_CARD_TYPE_OSAE: + return " Guest LAN QDIO"; + case QETH_CARD_TYPE_IQD: + return " Guest LAN Hiper"; + default: + return " unknown"; + } + } else { + switch (card->info.type) { + case QETH_CARD_TYPE_OSAE: + return " OSD Express"; + case QETH_CARD_TYPE_IQD: + return " HiperSockets"; + case QETH_CARD_TYPE_OSN: + return " OSN QDIO"; + default: + return " unknown"; + } + } + return " n/a"; +} + +/* max length to be returned: 14 */ +const char *qeth_get_cardname_short(struct qeth_card *card) +{ + if (card->info.guestlan) { + switch (card->info.type) { + case QETH_CARD_TYPE_OSAE: + return "GuestLAN QDIO"; + case QETH_CARD_TYPE_IQD: + return "GuestLAN Hiper"; + default: + return "unknown"; + } + } else { + switch (card->info.type) { + case QETH_CARD_TYPE_OSAE: + switch (card->info.link_type) { + case QETH_LINK_TYPE_FAST_ETH: + return "OSD_100"; + case 
QETH_LINK_TYPE_HSTR: + return "HSTR"; + case QETH_LINK_TYPE_GBIT_ETH: + return "OSD_1000"; + case QETH_LINK_TYPE_10GBIT_ETH: + return "OSD_10GIG"; + case QETH_LINK_TYPE_LANE_ETH100: + return "OSD_FE_LANE"; + case QETH_LINK_TYPE_LANE_TR: + return "OSD_TR_LANE"; + case QETH_LINK_TYPE_LANE_ETH1000: + return "OSD_GbE_LANE"; + case QETH_LINK_TYPE_LANE: + return "OSD_ATM_LANE"; + default: + return "OSD_Express"; + } + case QETH_CARD_TYPE_IQD: + return "HiperSockets"; + case QETH_CARD_TYPE_OSN: + return "OSN"; + default: + return "unknown"; + } + } + return "n/a"; +} + +void qeth_set_allowed_threads(struct qeth_card *card, unsigned long threads, + int clear_start_mask) +{ + unsigned long flags; + + spin_lock_irqsave(&card->thread_mask_lock, flags); + card->thread_allowed_mask = threads; + if (clear_start_mask) + card->thread_start_mask &= threads; + spin_unlock_irqrestore(&card->thread_mask_lock, flags); + wake_up(&card->wait_q); +} +EXPORT_SYMBOL_GPL(qeth_set_allowed_threads); + +int qeth_threads_running(struct qeth_card *card, unsigned long threads) +{ + unsigned long flags; + int rc = 0; + + spin_lock_irqsave(&card->thread_mask_lock, flags); + rc = (card->thread_running_mask & threads); + spin_unlock_irqrestore(&card->thread_mask_lock, flags); + return rc; +} +EXPORT_SYMBOL_GPL(qeth_threads_running); + +int qeth_wait_for_threads(struct qeth_card *card, unsigned long threads) +{ + return wait_event_interruptible(card->wait_q, + qeth_threads_running(card, threads) == 0); +} +EXPORT_SYMBOL_GPL(qeth_wait_for_threads); + +void qeth_clear_working_pool_list(struct qeth_card *card) +{ + struct qeth_buffer_pool_entry *pool_entry, *tmp; + + QETH_DBF_TEXT(trace, 5, "clwrklst"); + list_for_each_entry_safe(pool_entry, tmp, + &card->qdio.in_buf_pool.entry_list, list){ + list_del(&pool_entry->list); + } +} +EXPORT_SYMBOL_GPL(qeth_clear_working_pool_list); + +static int qeth_alloc_buffer_pool(struct qeth_card *card) +{ + struct qeth_buffer_pool_entry *pool_entry; + void *ptr; + int i, j; + + QETH_DBF_TEXT(trace, 5, "alocpool"); + for (i = 0; i < card->qdio.init_pool.buf_count; ++i) { + pool_entry = kmalloc(sizeof(*pool_entry), GFP_KERNEL); + if (!pool_entry) { + qeth_free_buffer_pool(card); + return -ENOMEM; + } + for (j = 0; j < QETH_MAX_BUFFER_ELEMENTS(card); ++j) { + ptr = (void *) __get_free_page(GFP_KERNEL|GFP_DMA); + if (!ptr) { + while (j > 0) + free_page((unsigned long) + pool_entry->elements[--j]); + kfree(pool_entry); + qeth_free_buffer_pool(card); + return -ENOMEM; + } + pool_entry->elements[j] = ptr; + } + list_add(&pool_entry->init_list, + &card->qdio.init_pool.entry_list); + } + return 0; +} + +int qeth_realloc_buffer_pool(struct qeth_card *card, int bufcnt) +{ + QETH_DBF_TEXT(trace, 2, "realcbp"); + + if ((card->state != CARD_STATE_DOWN) && + (card->state != CARD_STATE_RECOVER)) + return -EPERM; + + /* TODO: steel/add buffers from/to a running card's buffer pool (?) 
*/ + qeth_clear_working_pool_list(card); + qeth_free_buffer_pool(card); + card->qdio.in_buf_pool.buf_count = bufcnt; + card->qdio.init_pool.buf_count = bufcnt; + return qeth_alloc_buffer_pool(card); +} + +int qeth_set_large_send(struct qeth_card *card, + enum qeth_large_send_types type) +{ + int rc = 0; + + if (card->dev == NULL) { + card->options.large_send = type; + return 0; + } + if (card->state == CARD_STATE_UP) + netif_tx_disable(card->dev); + card->options.large_send = type; + switch (card->options.large_send) { + case QETH_LARGE_SEND_EDDP: + card->dev->features |= NETIF_F_TSO | NETIF_F_SG | + NETIF_F_HW_CSUM; + break; + case QETH_LARGE_SEND_TSO: + if (qeth_is_supported(card, IPA_OUTBOUND_TSO)) { + card->dev->features |= NETIF_F_TSO | NETIF_F_SG | + NETIF_F_HW_CSUM; + } else { + PRINT_WARN("TSO not supported on %s. " + "large_send set to 'no'.\n", + card->dev->name); + card->dev->features &= ~(NETIF_F_TSO | NETIF_F_SG | + NETIF_F_HW_CSUM); + card->options.large_send = QETH_LARGE_SEND_NO; + rc = -EOPNOTSUPP; + } + break; + default: /* includes QETH_LARGE_SEND_NO */ + card->dev->features &= ~(NETIF_F_TSO | NETIF_F_SG | + NETIF_F_HW_CSUM); + break; + } + if (card->state == CARD_STATE_UP) + netif_wake_queue(card->dev); + return rc; +} +EXPORT_SYMBOL_GPL(qeth_set_large_send); + +static int qeth_issue_next_read(struct qeth_card *card) +{ + int rc; + struct qeth_cmd_buffer *iob; + + QETH_DBF_TEXT(trace, 5, "issnxrd"); + if (card->read.state != CH_STATE_UP) + return -EIO; + iob = qeth_get_buffer(&card->read); + if (!iob) { + PRINT_WARN("issue_next_read failed: no iob available!\n"); + return -ENOMEM; + } + qeth_setup_ccw(&card->read, iob->data, QETH_BUFSIZE); + QETH_DBF_TEXT(trace, 6, "noirqpnd"); + rc = ccw_device_start(card->read.ccwdev, &card->read.ccw, + (addr_t) iob, 0, 0); + if (rc) { + PRINT_ERR("Error in starting next read ccw! 
rc=%i\n", rc); + atomic_set(&card->read.irq_pending, 0); + qeth_schedule_recovery(card); + wake_up(&card->wait_q); + } + return rc; +} + +static struct qeth_reply *qeth_alloc_reply(struct qeth_card *card) +{ + struct qeth_reply *reply; + + reply = kzalloc(sizeof(struct qeth_reply), GFP_ATOMIC); + if (reply) { + atomic_set(&reply->refcnt, 1); + atomic_set(&reply->received, 0); + reply->card = card; + }; + return reply; +} + +static void qeth_get_reply(struct qeth_reply *reply) +{ + WARN_ON(atomic_read(&reply->refcnt) <= 0); + atomic_inc(&reply->refcnt); +} + +static void qeth_put_reply(struct qeth_reply *reply) +{ + WARN_ON(atomic_read(&reply->refcnt) <= 0); + if (atomic_dec_and_test(&reply->refcnt)) + kfree(reply); +} + +static void qeth_issue_ipa_msg(struct qeth_ipa_cmd *cmd, + struct qeth_card *card) +{ + int rc; + int com; + char *ipa_name; + + com = cmd->hdr.command; + rc = cmd->hdr.return_code; + ipa_name = qeth_get_ipa_cmd_name(com); + + PRINT_ERR("%s(x%X) for %s returned x%X \"%s\"\n", ipa_name, com, + QETH_CARD_IFNAME(card), rc, qeth_get_ipa_msg(rc)); +} + +static struct qeth_ipa_cmd *qeth_check_ipa_data(struct qeth_card *card, + struct qeth_cmd_buffer *iob) +{ + struct qeth_ipa_cmd *cmd = NULL; + + QETH_DBF_TEXT(trace, 5, "chkipad"); + if (IS_IPA(iob->data)) { + cmd = (struct qeth_ipa_cmd *) PDU_ENCAPSULATION(iob->data); + if (IS_IPA_REPLY(cmd)) { + if (cmd->hdr.return_code && + (cmd->hdr.command < IPA_CMD_SETCCID || + cmd->hdr.command > IPA_CMD_MODCCID)) + qeth_issue_ipa_msg(cmd, card); + return cmd; + } else { + switch (cmd->hdr.command) { + case IPA_CMD_STOPLAN: + PRINT_WARN("Link failure on %s (CHPID 0x%X) - " + "there is a network problem or " + "someone pulled the cable or " + "disabled the port.\n", + QETH_CARD_IFNAME(card), + card->info.chpid); + card->lan_online = 0; + if (card->dev && netif_carrier_ok(card->dev)) + netif_carrier_off(card->dev); + return NULL; + case IPA_CMD_STARTLAN: + PRINT_INFO("Link reestablished on %s " + "(CHPID 0x%X). Scheduling " + "IP address reset.\n", + QETH_CARD_IFNAME(card), + card->info.chpid); + netif_carrier_on(card->dev); + qeth_schedule_recovery(card); + return NULL; + case IPA_CMD_MODCCID: + return cmd; + case IPA_CMD_REGISTER_LOCAL_ADDR: + QETH_DBF_TEXT(trace, 3, "irla"); + break; + case IPA_CMD_UNREGISTER_LOCAL_ADDR: + QETH_DBF_TEXT(trace, 3, "urla"); + break; + default: + PRINT_WARN("Received data is IPA " + "but not a reply!\n"); + break; + } + } + } + return cmd; +} + +void qeth_clear_ipacmd_list(struct qeth_card *card) +{ + struct qeth_reply *reply, *r; + unsigned long flags; + + QETH_DBF_TEXT(trace, 4, "clipalst"); + + spin_lock_irqsave(&card->lock, flags); + list_for_each_entry_safe(reply, r, &card->cmd_waiter_list, list) { + qeth_get_reply(reply); + reply->rc = -EIO; + atomic_inc(&reply->received); + list_del_init(&reply->list); + wake_up(&reply->wait_q); + qeth_put_reply(reply); + } + spin_unlock_irqrestore(&card->lock, flags); +} +EXPORT_SYMBOL_GPL(qeth_clear_ipacmd_list); + +static int qeth_check_idx_response(unsigned char *buffer) +{ + if (!buffer) + return 0; + + QETH_DBF_HEX(control, 2, buffer, QETH_DBF_CONTROL_LEN); + if ((buffer[2] & 0xc0) == 0xc0) { + PRINT_WARN("received an IDX TERMINATE " + "with cause code 0x%02x%s\n", + buffer[4], + ((buffer[4] == 0x22) ? 
+ " -- try another portname" : "")); + QETH_DBF_TEXT(trace, 2, "ckidxres"); + QETH_DBF_TEXT(trace, 2, " idxterm"); + QETH_DBF_TEXT_(trace, 2, " rc%d", -EIO); + return -EIO; + } + return 0; +} + +static void qeth_setup_ccw(struct qeth_channel *channel, unsigned char *iob, + __u32 len) +{ + struct qeth_card *card; + + QETH_DBF_TEXT(trace, 4, "setupccw"); + card = CARD_FROM_CDEV(channel->ccwdev); + if (channel == &card->read) + memcpy(&channel->ccw, READ_CCW, sizeof(struct ccw1)); + else + memcpy(&channel->ccw, WRITE_CCW, sizeof(struct ccw1)); + channel->ccw.count = len; + channel->ccw.cda = (__u32) __pa(iob); +} + +static struct qeth_cmd_buffer *__qeth_get_buffer(struct qeth_channel *channel) +{ + __u8 index; + + QETH_DBF_TEXT(trace, 6, "getbuff"); + index = channel->io_buf_no; + do { + if (channel->iob[index].state == BUF_STATE_FREE) { + channel->iob[index].state = BUF_STATE_LOCKED; + channel->io_buf_no = (channel->io_buf_no + 1) % + QETH_CMD_BUFFER_NO; + memset(channel->iob[index].data, 0, QETH_BUFSIZE); + return channel->iob + index; + } + index = (index + 1) % QETH_CMD_BUFFER_NO; + } while (index != channel->io_buf_no); + + return NULL; +} + +void qeth_release_buffer(struct qeth_channel *channel, + struct qeth_cmd_buffer *iob) +{ + unsigned long flags; + + QETH_DBF_TEXT(trace, 6, "relbuff"); + spin_lock_irqsave(&channel->iob_lock, flags); + memset(iob->data, 0, QETH_BUFSIZE); + iob->state = BUF_STATE_FREE; + iob->callback = qeth_send_control_data_cb; + iob->rc = 0; + spin_unlock_irqrestore(&channel->iob_lock, flags); +} +EXPORT_SYMBOL_GPL(qeth_release_buffer); + +static struct qeth_cmd_buffer *qeth_get_buffer(struct qeth_channel *channel) +{ + struct qeth_cmd_buffer *buffer = NULL; + unsigned long flags; + + spin_lock_irqsave(&channel->iob_lock, flags); + buffer = __qeth_get_buffer(channel); + spin_unlock_irqrestore(&channel->iob_lock, flags); + return buffer; +} + +struct qeth_cmd_buffer *qeth_wait_for_buffer(struct qeth_channel *channel) +{ + struct qeth_cmd_buffer *buffer; + wait_event(channel->wait_q, + ((buffer = qeth_get_buffer(channel)) != NULL)); + return buffer; +} +EXPORT_SYMBOL_GPL(qeth_wait_for_buffer); + +void qeth_clear_cmd_buffers(struct qeth_channel *channel) +{ + int cnt; + + for (cnt = 0; cnt < QETH_CMD_BUFFER_NO; cnt++) + qeth_release_buffer(channel, &channel->iob[cnt]); + channel->buf_no = 0; + channel->io_buf_no = 0; +} +EXPORT_SYMBOL_GPL(qeth_clear_cmd_buffers); + +static void qeth_send_control_data_cb(struct qeth_channel *channel, + struct qeth_cmd_buffer *iob) +{ + struct qeth_card *card; + struct qeth_reply *reply, *r; + struct qeth_ipa_cmd *cmd; + unsigned long flags; + int keep_reply; + + QETH_DBF_TEXT(trace, 4, "sndctlcb"); + + card = CARD_FROM_CDEV(channel->ccwdev); + if (qeth_check_idx_response(iob->data)) { + qeth_clear_ipacmd_list(card); + qeth_schedule_recovery(card); + goto out; + } + + cmd = qeth_check_ipa_data(card, iob); + if ((cmd == NULL) && (card->state != CARD_STATE_DOWN)) + goto out; + /*in case of OSN : check if cmd is set */ + if (card->info.type == QETH_CARD_TYPE_OSN && + cmd && + cmd->hdr.command != IPA_CMD_STARTLAN && + card->osn_info.assist_cb != NULL) { + card->osn_info.assist_cb(card->dev, cmd); + goto out; + } + + spin_lock_irqsave(&card->lock, flags); + list_for_each_entry_safe(reply, r, &card->cmd_waiter_list, list) { + if ((reply->seqno == QETH_IDX_COMMAND_SEQNO) || + ((cmd) && (reply->seqno == cmd->hdr.seqno))) { + qeth_get_reply(reply); + list_del_init(&reply->list); + spin_unlock_irqrestore(&card->lock, flags); + keep_reply = 0; + 
if (reply->callback != NULL) { + if (cmd) { + reply->offset = (__u16)((char *)cmd - + (char *)iob->data); + keep_reply = reply->callback(card, + reply, + (unsigned long)cmd); + } else + keep_reply = reply->callback(card, + reply, + (unsigned long)iob); + } + if (cmd) + reply->rc = (u16) cmd->hdr.return_code; + else if (iob->rc) + reply->rc = iob->rc; + if (keep_reply) { + spin_lock_irqsave(&card->lock, flags); + list_add_tail(&reply->list, + &card->cmd_waiter_list); + spin_unlock_irqrestore(&card->lock, flags); + } else { + atomic_inc(&reply->received); + wake_up(&reply->wait_q); + } + qeth_put_reply(reply); + goto out; + } + } + spin_unlock_irqrestore(&card->lock, flags); +out: + memcpy(&card->seqno.pdu_hdr_ack, + QETH_PDU_HEADER_SEQ_NO(iob->data), + QETH_SEQ_NO_LENGTH); + qeth_release_buffer(channel, iob); +} + +static int qeth_setup_channel(struct qeth_channel *channel) +{ + int cnt; + + QETH_DBF_TEXT(setup, 2, "setupch"); + for (cnt = 0; cnt < QETH_CMD_BUFFER_NO; cnt++) { + channel->iob[cnt].data = (char *) + kmalloc(QETH_BUFSIZE, GFP_DMA|GFP_KERNEL); + if (channel->iob[cnt].data == NULL) + break; + channel->iob[cnt].state = BUF_STATE_FREE; + channel->iob[cnt].channel = channel; + channel->iob[cnt].callback = qeth_send_control_data_cb; + channel->iob[cnt].rc = 0; + } + if (cnt < QETH_CMD_BUFFER_NO) { + while (cnt-- > 0) + kfree(channel->iob[cnt].data); + return -ENOMEM; + } + channel->buf_no = 0; + channel->io_buf_no = 0; + atomic_set(&channel->irq_pending, 0); + spin_lock_init(&channel->iob_lock); + + init_waitqueue_head(&channel->wait_q); + return 0; +} + +static int qeth_set_thread_start_bit(struct qeth_card *card, + unsigned long thread) +{ + unsigned long flags; + + spin_lock_irqsave(&card->thread_mask_lock, flags); + if (!(card->thread_allowed_mask & thread) || + (card->thread_start_mask & thread)) { + spin_unlock_irqrestore(&card->thread_mask_lock, flags); + return -EPERM; + } + card->thread_start_mask |= thread; + spin_unlock_irqrestore(&card->thread_mask_lock, flags); + return 0; +} + +void qeth_clear_thread_start_bit(struct qeth_card *card, unsigned long thread) +{ + unsigned long flags; + + spin_lock_irqsave(&card->thread_mask_lock, flags); + card->thread_start_mask &= ~thread; + spin_unlock_irqrestore(&card->thread_mask_lock, flags); + wake_up(&card->wait_q); +} +EXPORT_SYMBOL_GPL(qeth_clear_thread_start_bit); + +void qeth_clear_thread_running_bit(struct qeth_card *card, unsigned long thread) +{ + unsigned long flags; + + spin_lock_irqsave(&card->thread_mask_lock, flags); + card->thread_running_mask &= ~thread; + spin_unlock_irqrestore(&card->thread_mask_lock, flags); + wake_up(&card->wait_q); +} +EXPORT_SYMBOL_GPL(qeth_clear_thread_running_bit); + +static int __qeth_do_run_thread(struct qeth_card *card, unsigned long thread) +{ + unsigned long flags; + int rc = 0; + + spin_lock_irqsave(&card->thread_mask_lock, flags); + if (card->thread_start_mask & thread) { + if ((card->thread_allowed_mask & thread) && + !(card->thread_running_mask & thread)) { + rc = 1; + card->thread_start_mask &= ~thread; + card->thread_running_mask |= thread; + } else + rc = -EPERM; + } + spin_unlock_irqrestore(&card->thread_mask_lock, flags); + return rc; +} + +int qeth_do_run_thread(struct qeth_card *card, unsigned long thread) +{ + int rc = 0; + + wait_event(card->wait_q, + (rc = __qeth_do_run_thread(card, thread)) >= 0); + return rc; +} +EXPORT_SYMBOL_GPL(qeth_do_run_thread); + +void qeth_schedule_recovery(struct qeth_card *card) +{ + QETH_DBF_TEXT(trace, 2, "startrec"); + if 
(qeth_set_thread_start_bit(card, QETH_RECOVER_THREAD) == 0) + schedule_work(&card->kernel_thread_starter); +} +EXPORT_SYMBOL_GPL(qeth_schedule_recovery); + +static int qeth_get_problem(struct ccw_device *cdev, struct irb *irb) +{ + int dstat, cstat; + char *sense; + + sense = (char *) irb->ecw; + cstat = irb->scsw.cstat; + dstat = irb->scsw.dstat; + + if (cstat & (SCHN_STAT_CHN_CTRL_CHK | SCHN_STAT_INTF_CTRL_CHK | + SCHN_STAT_CHN_DATA_CHK | SCHN_STAT_CHAIN_CHECK | + SCHN_STAT_PROT_CHECK | SCHN_STAT_PROG_CHECK)) { + QETH_DBF_TEXT(trace, 2, "CGENCHK"); + PRINT_WARN("check on device %s, dstat=x%x, cstat=x%x ", + cdev->dev.bus_id, dstat, cstat); + print_hex_dump(KERN_WARNING, "qeth: irb ", DUMP_PREFIX_OFFSET, + 16, 1, irb, 64, 1); + return 1; + } + + if (dstat & DEV_STAT_UNIT_CHECK) { + if (sense[SENSE_RESETTING_EVENT_BYTE] & + SENSE_RESETTING_EVENT_FLAG) { + QETH_DBF_TEXT(trace, 2, "REVIND"); + return 1; + } + if (sense[SENSE_COMMAND_REJECT_BYTE] & + SENSE_COMMAND_REJECT_FLAG) { + QETH_DBF_TEXT(trace, 2, "CMDREJi"); + return 0; + } + if ((sense[2] == 0xaf) && (sense[3] == 0xfe)) { + QETH_DBF_TEXT(trace, 2, "AFFE"); + return 1; + } + if ((!sense[0]) && (!sense[1]) && (!sense[2]) && (!sense[3])) { + QETH_DBF_TEXT(trace, 2, "ZEROSEN"); + return 0; + } + QETH_DBF_TEXT(trace, 2, "DGENCHK"); + return 1; + } + return 0; +} + +static long __qeth_check_irb_error(struct ccw_device *cdev, + unsigned long intparm, struct irb *irb) +{ + if (!IS_ERR(irb)) + return 0; + + switch (PTR_ERR(irb)) { + case -EIO: + PRINT_WARN("i/o-error on device %s\n", cdev->dev.bus_id); + QETH_DBF_TEXT(trace, 2, "ckirberr"); + QETH_DBF_TEXT_(trace, 2, " rc%d", -EIO); + break; + case -ETIMEDOUT: + PRINT_WARN("timeout on device %s\n", cdev->dev.bus_id); + QETH_DBF_TEXT(trace, 2, "ckirberr"); + QETH_DBF_TEXT_(trace, 2, " rc%d", -ETIMEDOUT); + if (intparm == QETH_RCD_PARM) { + struct qeth_card *card = CARD_FROM_CDEV(cdev); + + if (card && (card->data.ccwdev == cdev)) { + card->data.state = CH_STATE_DOWN; + wake_up(&card->wait_q); + } + } + break; + default: + PRINT_WARN("unknown error %ld on device %s\n", PTR_ERR(irb), + cdev->dev.bus_id); + QETH_DBF_TEXT(trace, 2, "ckirberr"); + QETH_DBF_TEXT(trace, 2, " rc???"); + } + return PTR_ERR(irb); +} + +static void qeth_irq(struct ccw_device *cdev, unsigned long intparm, + struct irb *irb) +{ + int rc; + int cstat, dstat; + struct qeth_cmd_buffer *buffer; + struct qeth_channel *channel; + struct qeth_card *card; + struct qeth_cmd_buffer *iob; + __u8 index; + + QETH_DBF_TEXT(trace, 5, "irq"); + + if (__qeth_check_irb_error(cdev, intparm, irb)) + return; + cstat = irb->scsw.cstat; + dstat = irb->scsw.dstat; + + card = CARD_FROM_CDEV(cdev); + if (!card) + return; + + if (card->read.ccwdev == cdev) { + channel = &card->read; + QETH_DBF_TEXT(trace, 5, "read"); + } else if (card->write.ccwdev == cdev) { + channel = &card->write; + QETH_DBF_TEXT(trace, 5, "write"); + } else { + channel = &card->data; + QETH_DBF_TEXT(trace, 5, "data"); + } + atomic_set(&channel->irq_pending, 0); + + if (irb->scsw.fctl & (SCSW_FCTL_CLEAR_FUNC)) + channel->state = CH_STATE_STOPPED; + + if (irb->scsw.fctl & (SCSW_FCTL_HALT_FUNC)) + channel->state = CH_STATE_HALTED; + + /*let's wake up immediately on data channel*/ + if ((channel == &card->data) && (intparm != 0) && + (intparm != QETH_RCD_PARM)) + goto out; + + if (intparm == QETH_CLEAR_CHANNEL_PARM) { + QETH_DBF_TEXT(trace, 6, "clrchpar"); + /* we don't have to handle this further */ + intparm = 0; + } + if (intparm == QETH_HALT_CHANNEL_PARM) { + 
QETH_DBF_TEXT(trace, 6, "hltchpar"); + /* we don't have to handle this further */ + intparm = 0; + } + if ((dstat & DEV_STAT_UNIT_EXCEP) || + (dstat & DEV_STAT_UNIT_CHECK) || + (cstat)) { + if (irb->esw.esw0.erw.cons) { + /* TODO: we should make this s390dbf */ + PRINT_WARN("sense data available on channel %s.\n", + CHANNEL_ID(channel)); + PRINT_WARN(" cstat 0x%X\n dstat 0x%X\n", cstat, dstat); + print_hex_dump(KERN_WARNING, "qeth: irb ", + DUMP_PREFIX_OFFSET, 16, 1, irb, 32, 1); + print_hex_dump(KERN_WARNING, "qeth: sense data ", + DUMP_PREFIX_OFFSET, 16, 1, irb->ecw, 32, 1); + } + if (intparm == QETH_RCD_PARM) { + channel->state = CH_STATE_DOWN; + goto out; + } + rc = qeth_get_problem(cdev, irb); + if (rc) { + qeth_schedule_recovery(card); + goto out; + } + } + + if (intparm == QETH_RCD_PARM) { + channel->state = CH_STATE_RCD_DONE; + goto out; + } + if (intparm) { + buffer = (struct qeth_cmd_buffer *) __va((addr_t)intparm); + buffer->state = BUF_STATE_PROCESSED; + } + if (channel == &card->data) + return; + if (channel == &card->read && + channel->state == CH_STATE_UP) + qeth_issue_next_read(card); + + iob = channel->iob; + index = channel->buf_no; + while (iob[index].state == BUF_STATE_PROCESSED) { + if (iob[index].callback != NULL) + iob[index].callback(channel, iob + index); + + index = (index + 1) % QETH_CMD_BUFFER_NO; + } + channel->buf_no = index; +out: + wake_up(&card->wait_q); + return; +} + +static void qeth_clear_output_buffer(struct qeth_qdio_out_q *queue, + struct qeth_qdio_out_buffer *buf) +{ + int i; + struct sk_buff *skb; + + /* is PCI flag set on buffer? */ + if (buf->buffer->element[0].flags & 0x40) + atomic_dec(&queue->set_pci_flags_count); + + skb = skb_dequeue(&buf->skb_list); + while (skb) { + atomic_dec(&skb->users); + dev_kfree_skb_any(skb); + skb = skb_dequeue(&buf->skb_list); + } + qeth_eddp_buf_release_contexts(buf); + for (i = 0; i < QETH_MAX_BUFFER_ELEMENTS(queue->card); ++i) { + buf->buffer->element[i].length = 0; + buf->buffer->element[i].addr = NULL; + buf->buffer->element[i].flags = 0; + } + buf->next_element_to_fill = 0; + atomic_set(&buf->state, QETH_QDIO_BUF_EMPTY); +} + +void qeth_clear_qdio_buffers(struct qeth_card *card) +{ + int i, j; + + QETH_DBF_TEXT(trace, 2, "clearqdbf"); + /* clear outbound buffers to free skbs */ + for (i = 0; i < card->qdio.no_out_queues; ++i) + if (card->qdio.out_qs[i]) { + for (j = 0; j < QDIO_MAX_BUFFERS_PER_Q; ++j) + qeth_clear_output_buffer(card->qdio.out_qs[i], + &card->qdio.out_qs[i]->bufs[j]); + } +} +EXPORT_SYMBOL_GPL(qeth_clear_qdio_buffers); + +static void qeth_free_buffer_pool(struct qeth_card *card) +{ + struct qeth_buffer_pool_entry *pool_entry, *tmp; + int i = 0; + QETH_DBF_TEXT(trace, 5, "freepool"); + list_for_each_entry_safe(pool_entry, tmp, + &card->qdio.init_pool.entry_list, init_list){ + for (i = 0; i < QETH_MAX_BUFFER_ELEMENTS(card); ++i) + free_page((unsigned long)pool_entry->elements[i]); + list_del(&pool_entry->init_list); + kfree(pool_entry); + } +} + +static void qeth_free_qdio_buffers(struct qeth_card *card) +{ + int i, j; + + QETH_DBF_TEXT(trace, 2, "freeqdbf"); + if (atomic_xchg(&card->qdio.state, QETH_QDIO_UNINITIALIZED) == + QETH_QDIO_UNINITIALIZED) + return; + kfree(card->qdio.in_q); + card->qdio.in_q = NULL; + /* inbound buffer pool */ + qeth_free_buffer_pool(card); + /* free outbound qdio_qs */ + if (card->qdio.out_qs) { + for (i = 0; i < card->qdio.no_out_queues; ++i) { + for (j = 0; j < QDIO_MAX_BUFFERS_PER_Q; ++j) + qeth_clear_output_buffer(card->qdio.out_qs[i], + 
&card->qdio.out_qs[i]->bufs[j]); + kfree(card->qdio.out_qs[i]); + } + kfree(card->qdio.out_qs); + card->qdio.out_qs = NULL; + } +} + +static void qeth_clean_channel(struct qeth_channel *channel) +{ + int cnt; + + QETH_DBF_TEXT(setup, 2, "freech"); + for (cnt = 0; cnt < QETH_CMD_BUFFER_NO; cnt++) + kfree(channel->iob[cnt].data); +} + +static int qeth_is_1920_device(struct qeth_card *card) +{ + int single_queue = 0; + struct ccw_device *ccwdev; + struct channelPath_dsc { + u8 flags; + u8 lsn; + u8 desc; + u8 chpid; + u8 swla; + u8 zeroes; + u8 chla; + u8 chpp; + } *chp_dsc; + + QETH_DBF_TEXT(setup, 2, "chk_1920"); + + ccwdev = card->data.ccwdev; + chp_dsc = (struct channelPath_dsc *)ccw_device_get_chp_desc(ccwdev, 0); + if (chp_dsc != NULL) { + /* CHPP field bit 6 == 1 -> single queue */ + single_queue = ((chp_dsc->chpp & 0x02) == 0x02); + kfree(chp_dsc); + } + QETH_DBF_TEXT_(setup, 2, "rc:%x", single_queue); + return single_queue; +} + +static void qeth_init_qdio_info(struct qeth_card *card) +{ + QETH_DBF_TEXT(setup, 4, "intqdinf"); + atomic_set(&card->qdio.state, QETH_QDIO_UNINITIALIZED); + /* inbound */ + card->qdio.in_buf_size = QETH_IN_BUF_SIZE_DEFAULT; + card->qdio.init_pool.buf_count = QETH_IN_BUF_COUNT_DEFAULT; + card->qdio.in_buf_pool.buf_count = card->qdio.init_pool.buf_count; + INIT_LIST_HEAD(&card->qdio.in_buf_pool.entry_list); + INIT_LIST_HEAD(&card->qdio.init_pool.entry_list); +} + +static void qeth_set_intial_options(struct qeth_card *card) +{ + card->options.route4.type = NO_ROUTER; + card->options.route6.type = NO_ROUTER; + card->options.checksum_type = QETH_CHECKSUM_DEFAULT; + card->options.broadcast_mode = QETH_TR_BROADCAST_ALLRINGS; + card->options.macaddr_mode = QETH_TR_MACADDR_NONCANONICAL; + card->options.fake_broadcast = 0; + card->options.add_hhlen = DEFAULT_ADD_HHLEN; + card->options.fake_ll = 0; + card->options.performance_stats = 0; + card->options.rx_sg_cb = QETH_RX_SG_CB; +} + +static int qeth_do_start_thread(struct qeth_card *card, unsigned long thread) +{ + unsigned long flags; + int rc = 0; + + spin_lock_irqsave(&card->thread_mask_lock, flags); + QETH_DBF_TEXT_(trace, 4, " %02x%02x%02x", + (u8) card->thread_start_mask, + (u8) card->thread_allowed_mask, + (u8) card->thread_running_mask); + rc = (card->thread_start_mask & thread); + spin_unlock_irqrestore(&card->thread_mask_lock, flags); + return rc; +} + +static void qeth_start_kernel_thread(struct work_struct *work) +{ + struct qeth_card *card = container_of(work, struct qeth_card, + kernel_thread_starter); + QETH_DBF_TEXT(trace , 2, "strthrd"); + + if (card->read.state != CH_STATE_UP && + card->write.state != CH_STATE_UP) + return; + if (qeth_do_start_thread(card, QETH_RECOVER_THREAD)) + kthread_run(card->discipline.recover, (void *) card, + "qeth_recover"); +} + +static int qeth_setup_card(struct qeth_card *card) +{ + + QETH_DBF_TEXT(setup, 2, "setupcrd"); + QETH_DBF_HEX(setup, 2, &card, sizeof(void *)); + + card->read.state = CH_STATE_DOWN; + card->write.state = CH_STATE_DOWN; + card->data.state = CH_STATE_DOWN; + card->state = CARD_STATE_DOWN; + card->lan_online = 0; + card->use_hard_stop = 0; + card->dev = NULL; + spin_lock_init(&card->vlanlock); + spin_lock_init(&card->mclock); + card->vlangrp = NULL; + spin_lock_init(&card->lock); + spin_lock_init(&card->ip_lock); + spin_lock_init(&card->thread_mask_lock); + card->thread_start_mask = 0; + card->thread_allowed_mask = 0; + card->thread_running_mask = 0; + INIT_WORK(&card->kernel_thread_starter, qeth_start_kernel_thread); + INIT_LIST_HEAD(&card->ip_list); 
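	/*
	 * kernel_thread_starter (initialized above) is the work item that
	 * qeth_schedule_recovery() schedules; qeth_start_kernel_thread()
	 * then kthread_run()s card->discipline.recover.  A rough sketch of
	 * how such a recover function would typically drive the thread-mask
	 * helpers exported earlier in this file -- the function name below
	 * is illustrative only, the discipline code is not part of this file:
	 *
	 *   static int qeth_lx_recover(void *ptr)
	 *   {
	 *           struct qeth_card *card = ptr;
	 *
	 *           if (!qeth_do_run_thread(card, QETH_RECOVER_THREAD))
	 *                   return 0;
	 *           ...take the card offline and back online...
	 *           qeth_clear_thread_start_bit(card, QETH_RECOVER_THREAD);
	 *           qeth_clear_thread_running_bit(card, QETH_RECOVER_THREAD);
	 *           return 0;
	 *   }
	 */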
+ card->ip_tbd_list = kmalloc(sizeof(struct list_head), GFP_KERNEL); + if (!card->ip_tbd_list) { + QETH_DBF_TEXT(setup, 0, "iptbdnom"); + return -ENOMEM; + } + INIT_LIST_HEAD(card->ip_tbd_list); + INIT_LIST_HEAD(&card->cmd_waiter_list); + init_waitqueue_head(&card->wait_q); + /* intial options */ + qeth_set_intial_options(card); + /* IP address takeover */ + INIT_LIST_HEAD(&card->ipato.entries); + card->ipato.enabled = 0; + card->ipato.invert4 = 0; + card->ipato.invert6 = 0; + /* init QDIO stuff */ + qeth_init_qdio_info(card); + return 0; +} + +static struct qeth_card *qeth_alloc_card(void) +{ + struct qeth_card *card; + + QETH_DBF_TEXT(setup, 2, "alloccrd"); + card = kzalloc(sizeof(struct qeth_card), GFP_DMA|GFP_KERNEL); + if (!card) + return NULL; + QETH_DBF_HEX(setup, 2, &card, sizeof(void *)); + if (qeth_setup_channel(&card->read)) { + kfree(card); + return NULL; + } + if (qeth_setup_channel(&card->write)) { + qeth_clean_channel(&card->read); + kfree(card); + return NULL; + } + card->options.layer2 = -1; + return card; +} + +static int qeth_determine_card_type(struct qeth_card *card) +{ + int i = 0; + + QETH_DBF_TEXT(setup, 2, "detcdtyp"); + + card->qdio.do_prio_queueing = QETH_PRIOQ_DEFAULT; + card->qdio.default_out_queue = QETH_DEFAULT_QUEUE; + while (known_devices[i][4]) { + if ((CARD_RDEV(card)->id.dev_type == known_devices[i][2]) && + (CARD_RDEV(card)->id.dev_model == known_devices[i][3])) { + card->info.type = known_devices[i][4]; + card->qdio.no_out_queues = known_devices[i][8]; + card->info.is_multicast_different = known_devices[i][9]; + if (qeth_is_1920_device(card)) { + PRINT_INFO("Priority Queueing not able " + "due to hardware limitations!\n"); + card->qdio.no_out_queues = 1; + card->qdio.default_out_queue = 0; + } + return 0; + } + i++; + } + card->info.type = QETH_CARD_TYPE_UNKNOWN; + PRINT_ERR("unknown card type on device %s\n", CARD_BUS_ID(card)); + return -ENOENT; +} + +static int qeth_clear_channel(struct qeth_channel *channel) +{ + unsigned long flags; + struct qeth_card *card; + int rc; + + QETH_DBF_TEXT(trace, 3, "clearch"); + card = CARD_FROM_CDEV(channel->ccwdev); + spin_lock_irqsave(get_ccwdev_lock(channel->ccwdev), flags); + rc = ccw_device_clear(channel->ccwdev, QETH_CLEAR_CHANNEL_PARM); + spin_unlock_irqrestore(get_ccwdev_lock(channel->ccwdev), flags); + + if (rc) + return rc; + rc = wait_event_interruptible_timeout(card->wait_q, + channel->state == CH_STATE_STOPPED, QETH_TIMEOUT); + if (rc == -ERESTARTSYS) + return rc; + if (channel->state != CH_STATE_STOPPED) + return -ETIME; + channel->state = CH_STATE_DOWN; + return 0; +} + +static int qeth_halt_channel(struct qeth_channel *channel) +{ + unsigned long flags; + struct qeth_card *card; + int rc; + + QETH_DBF_TEXT(trace, 3, "haltch"); + card = CARD_FROM_CDEV(channel->ccwdev); + spin_lock_irqsave(get_ccwdev_lock(channel->ccwdev), flags); + rc = ccw_device_halt(channel->ccwdev, QETH_HALT_CHANNEL_PARM); + spin_unlock_irqrestore(get_ccwdev_lock(channel->ccwdev), flags); + + if (rc) + return rc; + rc = wait_event_interruptible_timeout(card->wait_q, + channel->state == CH_STATE_HALTED, QETH_TIMEOUT); + if (rc == -ERESTARTSYS) + return rc; + if (channel->state != CH_STATE_HALTED) + return -ETIME; + return 0; +} + +static int qeth_halt_channels(struct qeth_card *card) +{ + int rc1 = 0, rc2 = 0, rc3 = 0; + + QETH_DBF_TEXT(trace, 3, "haltchs"); + rc1 = qeth_halt_channel(&card->read); + rc2 = qeth_halt_channel(&card->write); + rc3 = qeth_halt_channel(&card->data); + if (rc1) + return rc1; + if (rc2) + return rc2; + 
return rc3; +} + +static int qeth_clear_channels(struct qeth_card *card) +{ + int rc1 = 0, rc2 = 0, rc3 = 0; + + QETH_DBF_TEXT(trace, 3, "clearchs"); + rc1 = qeth_clear_channel(&card->read); + rc2 = qeth_clear_channel(&card->write); + rc3 = qeth_clear_channel(&card->data); + if (rc1) + return rc1; + if (rc2) + return rc2; + return rc3; +} + +static int qeth_clear_halt_card(struct qeth_card *card, int halt) +{ + int rc = 0; + + QETH_DBF_TEXT(trace, 3, "clhacrd"); + QETH_DBF_HEX(trace, 3, &card, sizeof(void *)); + + if (halt) + rc = qeth_halt_channels(card); + if (rc) + return rc; + return qeth_clear_channels(card); +} + +int qeth_qdio_clear_card(struct qeth_card *card, int use_halt) +{ + int rc = 0; + + QETH_DBF_TEXT(trace, 3, "qdioclr"); + switch (atomic_cmpxchg(&card->qdio.state, QETH_QDIO_ESTABLISHED, + QETH_QDIO_CLEANING)) { + case QETH_QDIO_ESTABLISHED: + if (card->info.type == QETH_CARD_TYPE_IQD) + rc = qdio_cleanup(CARD_DDEV(card), + QDIO_FLAG_CLEANUP_USING_HALT); + else + rc = qdio_cleanup(CARD_DDEV(card), + QDIO_FLAG_CLEANUP_USING_CLEAR); + if (rc) + QETH_DBF_TEXT_(trace, 3, "1err%d", rc); + atomic_set(&card->qdio.state, QETH_QDIO_ALLOCATED); + break; + case QETH_QDIO_CLEANING: + return rc; + default: + break; + } + rc = qeth_clear_halt_card(card, use_halt); + if (rc) + QETH_DBF_TEXT_(trace, 3, "2err%d", rc); + card->state = CARD_STATE_DOWN; + return rc; +} +EXPORT_SYMBOL_GPL(qeth_qdio_clear_card); + +static int qeth_read_conf_data(struct qeth_card *card, void **buffer, + int *length) +{ + struct ciw *ciw; + char *rcd_buf; + int ret; + struct qeth_channel *channel = &card->data; + unsigned long flags; + + /* + * scan for RCD command in extended SenseID data + */ + ciw = ccw_device_get_ciw(channel->ccwdev, CIW_TYPE_RCD); + if (!ciw || ciw->cmd == 0) + return -EOPNOTSUPP; + rcd_buf = kzalloc(ciw->count, GFP_KERNEL | GFP_DMA); + if (!rcd_buf) + return -ENOMEM; + + channel->ccw.cmd_code = ciw->cmd; + channel->ccw.cda = (__u32) __pa(rcd_buf); + channel->ccw.count = ciw->count; + channel->ccw.flags = CCW_FLAG_SLI; + channel->state = CH_STATE_RCD; + spin_lock_irqsave(get_ccwdev_lock(channel->ccwdev), flags); + ret = ccw_device_start_timeout(channel->ccwdev, &channel->ccw, + QETH_RCD_PARM, LPM_ANYPATH, 0, + QETH_RCD_TIMEOUT); + spin_unlock_irqrestore(get_ccwdev_lock(channel->ccwdev), flags); + if (!ret) + wait_event(card->wait_q, + (channel->state == CH_STATE_RCD_DONE || + channel->state == CH_STATE_DOWN)); + if (channel->state == CH_STATE_DOWN) + ret = -EIO; + else + channel->state = CH_STATE_DOWN; + if (ret) { + kfree(rcd_buf); + *buffer = NULL; + *length = 0; + } else { + *length = ciw->count; + *buffer = rcd_buf; + } + return ret; +} + +static int qeth_get_unitaddr(struct qeth_card *card) +{ + int length; + char *prcd; + int rc; + + QETH_DBF_TEXT(setup, 2, "getunit"); + rc = qeth_read_conf_data(card, (void **) &prcd, &length); + if (rc) { + PRINT_ERR("qeth_read_conf_data for device %s returned %i\n", + CARD_DDEV_ID(card), rc); + return rc; + } + card->info.chpid = prcd[30]; + card->info.unit_addr2 = prcd[31]; + card->info.cula = prcd[63]; + card->info.guestlan = ((prcd[0x10] == _ascebc['V']) && + (prcd[0x11] == _ascebc['M'])); + kfree(prcd); + return 0; +} + +static void qeth_init_tokens(struct qeth_card *card) +{ + card->token.issuer_rm_w = 0x00010103UL; + card->token.cm_filter_w = 0x00010108UL; + card->token.cm_connection_w = 0x0001010aUL; + card->token.ulp_filter_w = 0x0001010bUL; + card->token.ulp_connection_w = 0x0001010dUL; +} + +static void qeth_init_func_level(struct qeth_card 
*card) +{ + if (card->ipato.enabled) { + if (card->info.type == QETH_CARD_TYPE_IQD) + card->info.func_level = + QETH_IDX_FUNC_LEVEL_IQD_ENA_IPAT; + else + card->info.func_level = + QETH_IDX_FUNC_LEVEL_OSAE_ENA_IPAT; + } else { + if (card->info.type == QETH_CARD_TYPE_IQD) + /*FIXME:why do we have same values for dis and ena for + osae??? */ + card->info.func_level = + QETH_IDX_FUNC_LEVEL_IQD_DIS_IPAT; + else + card->info.func_level = + QETH_IDX_FUNC_LEVEL_OSAE_DIS_IPAT; + } +} + +static inline __u16 qeth_raw_devno_from_bus_id(char *id) +{ + id += (strlen(id) - 4); + return (__u16) simple_strtoul(id, &id, 16); +} + +static int qeth_idx_activate_get_answer(struct qeth_channel *channel, + void (*idx_reply_cb)(struct qeth_channel *, + struct qeth_cmd_buffer *)) +{ + struct qeth_cmd_buffer *iob; + unsigned long flags; + int rc; + struct qeth_card *card; + + QETH_DBF_TEXT(setup, 2, "idxanswr"); + card = CARD_FROM_CDEV(channel->ccwdev); + iob = qeth_get_buffer(channel); + iob->callback = idx_reply_cb; + memcpy(&channel->ccw, READ_CCW, sizeof(struct ccw1)); + channel->ccw.count = QETH_BUFSIZE; + channel->ccw.cda = (__u32) __pa(iob->data); + + wait_event(card->wait_q, + atomic_cmpxchg(&channel->irq_pending, 0, 1) == 0); + QETH_DBF_TEXT(setup, 6, "noirqpnd"); + spin_lock_irqsave(get_ccwdev_lock(channel->ccwdev), flags); + rc = ccw_device_start(channel->ccwdev, + &channel->ccw, (addr_t) iob, 0, 0); + spin_unlock_irqrestore(get_ccwdev_lock(channel->ccwdev), flags); + + if (rc) { + PRINT_ERR("Error2 in activating channel rc=%d\n", rc); + QETH_DBF_TEXT_(setup, 2, "2err%d", rc); + atomic_set(&channel->irq_pending, 0); + wake_up(&card->wait_q); + return rc; + } + rc = wait_event_interruptible_timeout(card->wait_q, + channel->state == CH_STATE_UP, QETH_TIMEOUT); + if (rc == -ERESTARTSYS) + return rc; + if (channel->state != CH_STATE_UP) { + rc = -ETIME; + QETH_DBF_TEXT_(setup, 2, "3err%d", rc); + qeth_clear_cmd_buffers(channel); + } else + rc = 0; + return rc; +} + +static int qeth_idx_activate_channel(struct qeth_channel *channel, + void (*idx_reply_cb)(struct qeth_channel *, + struct qeth_cmd_buffer *)) +{ + struct qeth_card *card; + struct qeth_cmd_buffer *iob; + unsigned long flags; + __u16 temp; + __u8 tmp; + int rc; + + card = CARD_FROM_CDEV(channel->ccwdev); + + QETH_DBF_TEXT(setup, 2, "idxactch"); + + iob = qeth_get_buffer(channel); + iob->callback = idx_reply_cb; + memcpy(&channel->ccw, WRITE_CCW, sizeof(struct ccw1)); + channel->ccw.count = IDX_ACTIVATE_SIZE; + channel->ccw.cda = (__u32) __pa(iob->data); + if (channel == &card->write) { + memcpy(iob->data, IDX_ACTIVATE_WRITE, IDX_ACTIVATE_SIZE); + memcpy(QETH_TRANSPORT_HEADER_SEQ_NO(iob->data), + &card->seqno.trans_hdr, QETH_SEQ_NO_LENGTH); + card->seqno.trans_hdr++; + } else { + memcpy(iob->data, IDX_ACTIVATE_READ, IDX_ACTIVATE_SIZE); + memcpy(QETH_TRANSPORT_HEADER_SEQ_NO(iob->data), + &card->seqno.trans_hdr, QETH_SEQ_NO_LENGTH); + } + tmp = ((__u8)card->info.portno) | 0x80; + memcpy(QETH_IDX_ACT_PNO(iob->data), &tmp, 1); + memcpy(QETH_IDX_ACT_ISSUER_RM_TOKEN(iob->data), + &card->token.issuer_rm_w, QETH_MPC_TOKEN_LENGTH); + memcpy(QETH_IDX_ACT_FUNC_LEVEL(iob->data), + &card->info.func_level, sizeof(__u16)); + temp = qeth_raw_devno_from_bus_id(CARD_DDEV_ID(card)); + memcpy(QETH_IDX_ACT_QDIO_DEV_CUA(iob->data), &temp, 2); + temp = (card->info.cula << 8) + card->info.unit_addr2; + memcpy(QETH_IDX_ACT_QDIO_DEV_REALADDR(iob->data), &temp, 2); + + wait_event(card->wait_q, + atomic_cmpxchg(&channel->irq_pending, 0, 1) == 0); + QETH_DBF_TEXT(setup, 6, 
"noirqpnd"); + spin_lock_irqsave(get_ccwdev_lock(channel->ccwdev), flags); + rc = ccw_device_start(channel->ccwdev, + &channel->ccw, (addr_t) iob, 0, 0); + spin_unlock_irqrestore(get_ccwdev_lock(channel->ccwdev), flags); + + if (rc) { + PRINT_ERR("Error1 in activating channel. rc=%d\n", rc); + QETH_DBF_TEXT_(setup, 2, "1err%d", rc); + atomic_set(&channel->irq_pending, 0); + wake_up(&card->wait_q); + return rc; + } + rc = wait_event_interruptible_timeout(card->wait_q, + channel->state == CH_STATE_ACTIVATING, QETH_TIMEOUT); + if (rc == -ERESTARTSYS) + return rc; + if (channel->state != CH_STATE_ACTIVATING) { + PRINT_WARN("IDX activate timed out!\n"); + QETH_DBF_TEXT_(setup, 2, "2err%d", -ETIME); + qeth_clear_cmd_buffers(channel); + return -ETIME; + } + return qeth_idx_activate_get_answer(channel, idx_reply_cb); +} + +static int qeth_peer_func_level(int level) +{ + if ((level & 0xff) == 8) + return (level & 0xff) + 0x400; + if (((level >> 8) & 3) == 1) + return (level & 0xff) + 0x200; + return level; +} + +static void qeth_idx_write_cb(struct qeth_channel *channel, + struct qeth_cmd_buffer *iob) +{ + struct qeth_card *card; + __u16 temp; + + QETH_DBF_TEXT(setup , 2, "idxwrcb"); + + if (channel->state == CH_STATE_DOWN) { + channel->state = CH_STATE_ACTIVATING; + goto out; + } + card = CARD_FROM_CDEV(channel->ccwdev); + + if (!(QETH_IS_IDX_ACT_POS_REPLY(iob->data))) { + if (QETH_IDX_ACT_CAUSE_CODE(iob->data) == 0x19) + PRINT_ERR("IDX_ACTIVATE on write channel device %s: " + "adapter exclusively used by another host\n", + CARD_WDEV_ID(card)); + else + PRINT_ERR("IDX_ACTIVATE on write channel device %s: " + "negative reply\n", CARD_WDEV_ID(card)); + goto out; + } + memcpy(&temp, QETH_IDX_ACT_FUNC_LEVEL(iob->data), 2); + if ((temp & ~0x0100) != qeth_peer_func_level(card->info.func_level)) { + PRINT_WARN("IDX_ACTIVATE on write channel device %s: " + "function level mismatch " + "(sent: 0x%x, received: 0x%x)\n", + CARD_WDEV_ID(card), card->info.func_level, temp); + goto out; + } + channel->state = CH_STATE_UP; +out: + qeth_release_buffer(channel, iob); +} + +static void qeth_idx_read_cb(struct qeth_channel *channel, + struct qeth_cmd_buffer *iob) +{ + struct qeth_card *card; + __u16 temp; + + QETH_DBF_TEXT(setup , 2, "idxrdcb"); + if (channel->state == CH_STATE_DOWN) { + channel->state = CH_STATE_ACTIVATING; + goto out; + } + + card = CARD_FROM_CDEV(channel->ccwdev); + if (qeth_check_idx_response(iob->data)) + goto out; + + if (!(QETH_IS_IDX_ACT_POS_REPLY(iob->data))) { + if (QETH_IDX_ACT_CAUSE_CODE(iob->data) == 0x19) + PRINT_ERR("IDX_ACTIVATE on read channel device %s: " + "adapter exclusively used by another host\n", + CARD_RDEV_ID(card)); + else + PRINT_ERR("IDX_ACTIVATE on read channel device %s: " + "negative reply\n", CARD_RDEV_ID(card)); + goto out; + } + +/** + * temporary fix for microcode bug + * to revert it,replace OR by AND + */ + if ((!QETH_IDX_NO_PORTNAME_REQUIRED(iob->data)) || + (card->info.type == QETH_CARD_TYPE_OSAE)) + card->info.portname_required = 1; + + memcpy(&temp, QETH_IDX_ACT_FUNC_LEVEL(iob->data), 2); + if (temp != qeth_peer_func_level(card->info.func_level)) { + PRINT_WARN("IDX_ACTIVATE on read channel device %s: function " + "level mismatch (sent: 0x%x, received: 0x%x)\n", + CARD_RDEV_ID(card), card->info.func_level, temp); + goto out; + } + memcpy(&card->token.issuer_rm_r, + QETH_IDX_ACT_ISSUER_RM_TOKEN(iob->data), + QETH_MPC_TOKEN_LENGTH); + memcpy(&card->info.mcl_level[0], + QETH_IDX_REPLY_LEVEL(iob->data), QETH_MCL_LENGTH); + channel->state = CH_STATE_UP; +out: + 
qeth_release_buffer(channel, iob); +} + +void qeth_prepare_control_data(struct qeth_card *card, int len, + struct qeth_cmd_buffer *iob) +{ + qeth_setup_ccw(&card->write, iob->data, len); + iob->callback = qeth_release_buffer; + + memcpy(QETH_TRANSPORT_HEADER_SEQ_NO(iob->data), + &card->seqno.trans_hdr, QETH_SEQ_NO_LENGTH); + card->seqno.trans_hdr++; + memcpy(QETH_PDU_HEADER_SEQ_NO(iob->data), + &card->seqno.pdu_hdr, QETH_SEQ_NO_LENGTH); + card->seqno.pdu_hdr++; + memcpy(QETH_PDU_HEADER_ACK_SEQ_NO(iob->data), + &card->seqno.pdu_hdr_ack, QETH_SEQ_NO_LENGTH); + QETH_DBF_HEX(control, 2, iob->data, QETH_DBF_CONTROL_LEN); +} +EXPORT_SYMBOL_GPL(qeth_prepare_control_data); + +int qeth_send_control_data(struct qeth_card *card, int len, + struct qeth_cmd_buffer *iob, + int (*reply_cb)(struct qeth_card *, struct qeth_reply *, + unsigned long), + void *reply_param) +{ + int rc; + unsigned long flags; + struct qeth_reply *reply = NULL; + unsigned long timeout; + + QETH_DBF_TEXT(trace, 2, "sendctl"); + + reply = qeth_alloc_reply(card); + if (!reply) { + PRINT_WARN("Could no alloc qeth_reply!\n"); + return -ENOMEM; + } + reply->callback = reply_cb; + reply->param = reply_param; + if (card->state == CARD_STATE_DOWN) + reply->seqno = QETH_IDX_COMMAND_SEQNO; + else + reply->seqno = card->seqno.ipa++; + init_waitqueue_head(&reply->wait_q); + spin_lock_irqsave(&card->lock, flags); + list_add_tail(&reply->list, &card->cmd_waiter_list); + spin_unlock_irqrestore(&card->lock, flags); + QETH_DBF_HEX(control, 2, iob->data, QETH_DBF_CONTROL_LEN); + + while (atomic_cmpxchg(&card->write.irq_pending, 0, 1)) ; + qeth_prepare_control_data(card, len, iob); + + if (IS_IPA(iob->data)) + timeout = jiffies + QETH_IPA_TIMEOUT; + else + timeout = jiffies + QETH_TIMEOUT; + + QETH_DBF_TEXT(trace, 6, "noirqpnd"); + spin_lock_irqsave(get_ccwdev_lock(card->write.ccwdev), flags); + rc = ccw_device_start(card->write.ccwdev, &card->write.ccw, + (addr_t) iob, 0, 0); + spin_unlock_irqrestore(get_ccwdev_lock(card->write.ccwdev), flags); + if (rc) { + PRINT_WARN("qeth_send_control_data: " + "ccw_device_start rc = %i\n", rc); + QETH_DBF_TEXT_(trace, 2, " err%d", rc); + spin_lock_irqsave(&card->lock, flags); + list_del_init(&reply->list); + qeth_put_reply(reply); + spin_unlock_irqrestore(&card->lock, flags); + qeth_release_buffer(iob->channel, iob); + atomic_set(&card->write.irq_pending, 0); + wake_up(&card->wait_q); + return rc; + } + while (!atomic_read(&reply->received)) { + if (time_after(jiffies, timeout)) { + spin_lock_irqsave(&reply->card->lock, flags); + list_del_init(&reply->list); + spin_unlock_irqrestore(&reply->card->lock, flags); + reply->rc = -ETIME; + atomic_inc(&reply->received); + wake_up(&reply->wait_q); + } + cpu_relax(); + }; + rc = reply->rc; + qeth_put_reply(reply); + return rc; +} +EXPORT_SYMBOL_GPL(qeth_send_control_data); + +static int qeth_cm_enable_cb(struct qeth_card *card, struct qeth_reply *reply, + unsigned long data) +{ + struct qeth_cmd_buffer *iob; + + QETH_DBF_TEXT(setup, 2, "cmenblcb"); + + iob = (struct qeth_cmd_buffer *) data; + memcpy(&card->token.cm_filter_r, + QETH_CM_ENABLE_RESP_FILTER_TOKEN(iob->data), + QETH_MPC_TOKEN_LENGTH); + QETH_DBF_TEXT_(setup, 2, " rc%d", iob->rc); + return 0; +} + +static int qeth_cm_enable(struct qeth_card *card) +{ + int rc; + struct qeth_cmd_buffer *iob; + + QETH_DBF_TEXT(setup, 2, "cmenable"); + + iob = qeth_wait_for_buffer(&card->write); + memcpy(iob->data, CM_ENABLE, CM_ENABLE_SIZE); + memcpy(QETH_CM_ENABLE_ISSUER_RM_TOKEN(iob->data), + &card->token.issuer_rm_r, 
QETH_MPC_TOKEN_LENGTH); + memcpy(QETH_CM_ENABLE_FILTER_TOKEN(iob->data), + &card->token.cm_filter_w, QETH_MPC_TOKEN_LENGTH); + + rc = qeth_send_control_data(card, CM_ENABLE_SIZE, iob, + qeth_cm_enable_cb, NULL); + return rc; +} + +static int qeth_cm_setup_cb(struct qeth_card *card, struct qeth_reply *reply, + unsigned long data) +{ + + struct qeth_cmd_buffer *iob; + + QETH_DBF_TEXT(setup, 2, "cmsetpcb"); + + iob = (struct qeth_cmd_buffer *) data; + memcpy(&card->token.cm_connection_r, + QETH_CM_SETUP_RESP_DEST_ADDR(iob->data), + QETH_MPC_TOKEN_LENGTH); + QETH_DBF_TEXT_(setup, 2, " rc%d", iob->rc); + return 0; +} + +static int qeth_cm_setup(struct qeth_card *card) +{ + int rc; + struct qeth_cmd_buffer *iob; + + QETH_DBF_TEXT(setup, 2, "cmsetup"); + + iob = qeth_wait_for_buffer(&card->write); + memcpy(iob->data, CM_SETUP, CM_SETUP_SIZE); + memcpy(QETH_CM_SETUP_DEST_ADDR(iob->data), + &card->token.issuer_rm_r, QETH_MPC_TOKEN_LENGTH); + memcpy(QETH_CM_SETUP_CONNECTION_TOKEN(iob->data), + &card->token.cm_connection_w, QETH_MPC_TOKEN_LENGTH); + memcpy(QETH_CM_SETUP_FILTER_TOKEN(iob->data), + &card->token.cm_filter_r, QETH_MPC_TOKEN_LENGTH); + rc = qeth_send_control_data(card, CM_SETUP_SIZE, iob, + qeth_cm_setup_cb, NULL); + return rc; + +} + +static inline int qeth_get_initial_mtu_for_card(struct qeth_card *card) +{ + switch (card->info.type) { + case QETH_CARD_TYPE_UNKNOWN: + return 1500; + case QETH_CARD_TYPE_IQD: + return card->info.max_mtu; + case QETH_CARD_TYPE_OSAE: + switch (card->info.link_type) { + case QETH_LINK_TYPE_HSTR: + case QETH_LINK_TYPE_LANE_TR: + return 2000; + default: + return 1492; + } + default: + return 1500; + } +} + +static inline int qeth_get_max_mtu_for_card(int cardtype) +{ + switch (cardtype) { + + case QETH_CARD_TYPE_UNKNOWN: + case QETH_CARD_TYPE_OSAE: + case QETH_CARD_TYPE_OSN: + return 61440; + case QETH_CARD_TYPE_IQD: + return 57344; + default: + return 1500; + } +} + +static inline int qeth_get_mtu_out_of_mpc(int cardtype) +{ + switch (cardtype) { + case QETH_CARD_TYPE_IQD: + return 1; + default: + return 0; + } +} + +static inline int qeth_get_mtu_outof_framesize(int framesize) +{ + switch (framesize) { + case 0x4000: + return 8192; + case 0x6000: + return 16384; + case 0xa000: + return 32768; + case 0xffff: + return 57344; + default: + return 0; + } +} + +static inline int qeth_mtu_is_valid(struct qeth_card *card, int mtu) +{ + switch (card->info.type) { + case QETH_CARD_TYPE_OSAE: + return ((mtu >= 576) && (mtu <= 61440)); + case QETH_CARD_TYPE_IQD: + return ((mtu >= 576) && + (mtu <= card->info.max_mtu + 4096 - 32)); + case QETH_CARD_TYPE_OSN: + case QETH_CARD_TYPE_UNKNOWN: + default: + return 1; + } +} + +static int qeth_ulp_enable_cb(struct qeth_card *card, struct qeth_reply *reply, + unsigned long data) +{ + + __u16 mtu, framesize; + __u16 len; + __u8 link_type; + struct qeth_cmd_buffer *iob; + + QETH_DBF_TEXT(setup, 2, "ulpenacb"); + + iob = (struct qeth_cmd_buffer *) data; + memcpy(&card->token.ulp_filter_r, + QETH_ULP_ENABLE_RESP_FILTER_TOKEN(iob->data), + QETH_MPC_TOKEN_LENGTH); + if (qeth_get_mtu_out_of_mpc(card->info.type)) { + memcpy(&framesize, QETH_ULP_ENABLE_RESP_MAX_MTU(iob->data), 2); + mtu = qeth_get_mtu_outof_framesize(framesize); + if (!mtu) { + iob->rc = -EINVAL; + QETH_DBF_TEXT_(setup, 2, " rc%d", iob->rc); + return 0; + } + card->info.max_mtu = mtu; + card->info.initial_mtu = mtu; + card->qdio.in_buf_size = mtu + 2 * PAGE_SIZE; + } else { + card->info.initial_mtu = qeth_get_initial_mtu_for_card(card); + card->info.max_mtu = 
qeth_get_max_mtu_for_card(card->info.type); + card->qdio.in_buf_size = QETH_IN_BUF_SIZE_DEFAULT; + } + + memcpy(&len, QETH_ULP_ENABLE_RESP_DIFINFO_LEN(iob->data), 2); + if (len >= QETH_MPC_DIFINFO_LEN_INDICATES_LINK_TYPE) { + memcpy(&link_type, + QETH_ULP_ENABLE_RESP_LINK_TYPE(iob->data), 1); + card->info.link_type = link_type; + } else + card->info.link_type = 0; + QETH_DBF_TEXT_(setup, 2, " rc%d", iob->rc); + return 0; +} + +static int qeth_ulp_enable(struct qeth_card *card) +{ + int rc; + char prot_type; + struct qeth_cmd_buffer *iob; + + /*FIXME: trace view callbacks*/ + QETH_DBF_TEXT(setup, 2, "ulpenabl"); + + iob = qeth_wait_for_buffer(&card->write); + memcpy(iob->data, ULP_ENABLE, ULP_ENABLE_SIZE); + + *(QETH_ULP_ENABLE_LINKNUM(iob->data)) = + (__u8) card->info.portno; + if (card->options.layer2) + if (card->info.type == QETH_CARD_TYPE_OSN) + prot_type = QETH_PROT_OSN2; + else + prot_type = QETH_PROT_LAYER2; + else + prot_type = QETH_PROT_TCPIP; + + memcpy(QETH_ULP_ENABLE_PROT_TYPE(iob->data), &prot_type, 1); + memcpy(QETH_ULP_ENABLE_DEST_ADDR(iob->data), + &card->token.cm_connection_r, QETH_MPC_TOKEN_LENGTH); + memcpy(QETH_ULP_ENABLE_FILTER_TOKEN(iob->data), + &card->token.ulp_filter_w, QETH_MPC_TOKEN_LENGTH); + memcpy(QETH_ULP_ENABLE_PORTNAME_AND_LL(iob->data), + card->info.portname, 9); + rc = qeth_send_control_data(card, ULP_ENABLE_SIZE, iob, + qeth_ulp_enable_cb, NULL); + return rc; + +} + +static int qeth_ulp_setup_cb(struct qeth_card *card, struct qeth_reply *reply, + unsigned long data) +{ + struct qeth_cmd_buffer *iob; + + QETH_DBF_TEXT(setup, 2, "ulpstpcb"); + + iob = (struct qeth_cmd_buffer *) data; + memcpy(&card->token.ulp_connection_r, + QETH_ULP_SETUP_RESP_CONNECTION_TOKEN(iob->data), + QETH_MPC_TOKEN_LENGTH); + QETH_DBF_TEXT_(setup, 2, " rc%d", iob->rc); + return 0; +} + +static int qeth_ulp_setup(struct qeth_card *card) +{ + int rc; + __u16 temp; + struct qeth_cmd_buffer *iob; + struct ccw_dev_id dev_id; + + QETH_DBF_TEXT(setup, 2, "ulpsetup"); + + iob = qeth_wait_for_buffer(&card->write); + memcpy(iob->data, ULP_SETUP, ULP_SETUP_SIZE); + + memcpy(QETH_ULP_SETUP_DEST_ADDR(iob->data), + &card->token.cm_connection_r, QETH_MPC_TOKEN_LENGTH); + memcpy(QETH_ULP_SETUP_CONNECTION_TOKEN(iob->data), + &card->token.ulp_connection_w, QETH_MPC_TOKEN_LENGTH); + memcpy(QETH_ULP_SETUP_FILTER_TOKEN(iob->data), + &card->token.ulp_filter_r, QETH_MPC_TOKEN_LENGTH); + + ccw_device_get_id(CARD_DDEV(card), &dev_id); + memcpy(QETH_ULP_SETUP_CUA(iob->data), &dev_id.devno, 2); + temp = (card->info.cula << 8) + card->info.unit_addr2; + memcpy(QETH_ULP_SETUP_REAL_DEVADDR(iob->data), &temp, 2); + rc = qeth_send_control_data(card, ULP_SETUP_SIZE, iob, + qeth_ulp_setup_cb, NULL); + return rc; +} + +static int qeth_alloc_qdio_buffers(struct qeth_card *card) +{ + int i, j; + + QETH_DBF_TEXT(setup, 2, "allcqdbf"); + + if (atomic_cmpxchg(&card->qdio.state, QETH_QDIO_UNINITIALIZED, + QETH_QDIO_ALLOCATED) != QETH_QDIO_UNINITIALIZED) + return 0; + + card->qdio.in_q = kmalloc(sizeof(struct qeth_qdio_q), + GFP_KERNEL|GFP_DMA); + if (!card->qdio.in_q) + goto out_nomem; + QETH_DBF_TEXT(setup, 2, "inq"); + QETH_DBF_HEX(setup, 2, &card->qdio.in_q, sizeof(void *)); + memset(card->qdio.in_q, 0, sizeof(struct qeth_qdio_q)); + /* give inbound qeth_qdio_buffers their qdio_buffers */ + for (i = 0; i < QDIO_MAX_BUFFERS_PER_Q; ++i) + card->qdio.in_q->bufs[i].buffer = + &card->qdio.in_q->qdio_bufs[i]; + /* inbound buffer pool */ + if (qeth_alloc_buffer_pool(card)) + goto out_freeinq; + /* outbound */ + 
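	/*
	 * Layout set up by this function: one inbound queue (in_q) whose
	 * QDIO_MAX_BUFFERS_PER_Q buffers point at the embedded qdio_bufs, a
	 * pool of page-sized inbound data elements (qeth_alloc_buffer_pool),
	 * and card->qdio.no_out_queues outbound queues, each buffer carrying
	 * its own skb_list and EDDP ctx_list.  The atomic_cmpxchg() at the
	 * top makes the allocation a no-op once the state has left
	 * QETH_QDIO_UNINITIALIZED; the error path below unwinds in reverse
	 * order and resets the state.
	 */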
card->qdio.out_qs = + kmalloc(card->qdio.no_out_queues * + sizeof(struct qeth_qdio_out_q *), GFP_KERNEL); + if (!card->qdio.out_qs) + goto out_freepool; + for (i = 0; i < card->qdio.no_out_queues; ++i) { + card->qdio.out_qs[i] = kmalloc(sizeof(struct qeth_qdio_out_q), + GFP_KERNEL|GFP_DMA); + if (!card->qdio.out_qs[i]) + goto out_freeoutq; + QETH_DBF_TEXT_(setup, 2, "outq %i", i); + QETH_DBF_HEX(setup, 2, &card->qdio.out_qs[i], sizeof(void *)); + memset(card->qdio.out_qs[i], 0, sizeof(struct qeth_qdio_out_q)); + card->qdio.out_qs[i]->queue_no = i; + /* give outbound qeth_qdio_buffers their qdio_buffers */ + for (j = 0; j < QDIO_MAX_BUFFERS_PER_Q; ++j) { + card->qdio.out_qs[i]->bufs[j].buffer = + &card->qdio.out_qs[i]->qdio_bufs[j]; + skb_queue_head_init(&card->qdio.out_qs[i]->bufs[j]. + skb_list); + lockdep_set_class( + &card->qdio.out_qs[i]->bufs[j].skb_list.lock, + &qdio_out_skb_queue_key); + INIT_LIST_HEAD(&card->qdio.out_qs[i]->bufs[j].ctx_list); + } + } + return 0; + +out_freeoutq: + while (i > 0) + kfree(card->qdio.out_qs[--i]); + kfree(card->qdio.out_qs); + card->qdio.out_qs = NULL; +out_freepool: + qeth_free_buffer_pool(card); +out_freeinq: + kfree(card->qdio.in_q); + card->qdio.in_q = NULL; +out_nomem: + atomic_set(&card->qdio.state, QETH_QDIO_UNINITIALIZED); + return -ENOMEM; +} + +static void qeth_create_qib_param_field(struct qeth_card *card, + char *param_field) +{ + + param_field[0] = _ascebc['P']; + param_field[1] = _ascebc['C']; + param_field[2] = _ascebc['I']; + param_field[3] = _ascebc['T']; + *((unsigned int *) (¶m_field[4])) = QETH_PCI_THRESHOLD_A(card); + *((unsigned int *) (¶m_field[8])) = QETH_PCI_THRESHOLD_B(card); + *((unsigned int *) (¶m_field[12])) = QETH_PCI_TIMER_VALUE(card); +} + +static void qeth_create_qib_param_field_blkt(struct qeth_card *card, + char *param_field) +{ + param_field[16] = _ascebc['B']; + param_field[17] = _ascebc['L']; + param_field[18] = _ascebc['K']; + param_field[19] = _ascebc['T']; + *((unsigned int *) (¶m_field[20])) = card->info.blkt.time_total; + *((unsigned int *) (¶m_field[24])) = card->info.blkt.inter_packet; + *((unsigned int *) (¶m_field[28])) = + card->info.blkt.inter_packet_jumbo; +} + +static int qeth_qdio_activate(struct qeth_card *card) +{ + QETH_DBF_TEXT(setup, 3, "qdioact"); + return qdio_activate(CARD_DDEV(card), 0); +} + +static int qeth_dm_act(struct qeth_card *card) +{ + int rc; + struct qeth_cmd_buffer *iob; + + QETH_DBF_TEXT(setup, 2, "dmact"); + + iob = qeth_wait_for_buffer(&card->write); + memcpy(iob->data, DM_ACT, DM_ACT_SIZE); + + memcpy(QETH_DM_ACT_DEST_ADDR(iob->data), + &card->token.cm_connection_r, QETH_MPC_TOKEN_LENGTH); + memcpy(QETH_DM_ACT_CONNECTION_TOKEN(iob->data), + &card->token.ulp_connection_r, QETH_MPC_TOKEN_LENGTH); + rc = qeth_send_control_data(card, DM_ACT_SIZE, iob, NULL, NULL); + return rc; +} + +static int qeth_mpc_initialize(struct qeth_card *card) +{ + int rc; + + QETH_DBF_TEXT(setup, 2, "mpcinit"); + + rc = qeth_issue_next_read(card); + if (rc) { + QETH_DBF_TEXT_(setup, 2, "1err%d", rc); + return rc; + } + rc = qeth_cm_enable(card); + if (rc) { + QETH_DBF_TEXT_(setup, 2, "2err%d", rc); + goto out_qdio; + } + rc = qeth_cm_setup(card); + if (rc) { + QETH_DBF_TEXT_(setup, 2, "3err%d", rc); + goto out_qdio; + } + rc = qeth_ulp_enable(card); + if (rc) { + QETH_DBF_TEXT_(setup, 2, "4err%d", rc); + goto out_qdio; + } + rc = qeth_ulp_setup(card); + if (rc) { + QETH_DBF_TEXT_(setup, 2, "5err%d", rc); + goto out_qdio; + } + rc = qeth_alloc_qdio_buffers(card); + if (rc) { + QETH_DBF_TEXT_(setup, 2, 
"5err%d", rc); + goto out_qdio; + } + rc = qeth_qdio_establish(card); + if (rc) { + QETH_DBF_TEXT_(setup, 2, "6err%d", rc); + qeth_free_qdio_buffers(card); + goto out_qdio; + } + rc = qeth_qdio_activate(card); + if (rc) { + QETH_DBF_TEXT_(setup, 2, "7err%d", rc); + goto out_qdio; + } + rc = qeth_dm_act(card); + if (rc) { + QETH_DBF_TEXT_(setup, 2, "8err%d", rc); + goto out_qdio; + } + + return 0; +out_qdio: + qeth_qdio_clear_card(card, card->info.type != QETH_CARD_TYPE_IQD); + return rc; +} + +static void qeth_print_status_with_portname(struct qeth_card *card) +{ + char dbf_text[15]; + int i; + + sprintf(dbf_text, "%s", card->info.portname + 1); + for (i = 0; i < 8; i++) + dbf_text[i] = + (char) _ebcasc[(__u8) dbf_text[i]]; + dbf_text[8] = 0; + PRINT_INFO("Device %s/%s/%s is a%s card%s%s%s\n" + "with link type %s (portname: %s)\n", + CARD_RDEV_ID(card), + CARD_WDEV_ID(card), + CARD_DDEV_ID(card), + qeth_get_cardname(card), + (card->info.mcl_level[0]) ? " (level: " : "", + (card->info.mcl_level[0]) ? card->info.mcl_level : "", + (card->info.mcl_level[0]) ? ")" : "", + qeth_get_cardname_short(card), + dbf_text); + +} + +static void qeth_print_status_no_portname(struct qeth_card *card) +{ + if (card->info.portname[0]) + PRINT_INFO("Device %s/%s/%s is a%s " + "card%s%s%s\nwith link type %s " + "(no portname needed by interface).\n", + CARD_RDEV_ID(card), + CARD_WDEV_ID(card), + CARD_DDEV_ID(card), + qeth_get_cardname(card), + (card->info.mcl_level[0]) ? " (level: " : "", + (card->info.mcl_level[0]) ? card->info.mcl_level : "", + (card->info.mcl_level[0]) ? ")" : "", + qeth_get_cardname_short(card)); + else + PRINT_INFO("Device %s/%s/%s is a%s " + "card%s%s%s\nwith link type %s.\n", + CARD_RDEV_ID(card), + CARD_WDEV_ID(card), + CARD_DDEV_ID(card), + qeth_get_cardname(card), + (card->info.mcl_level[0]) ? " (level: " : "", + (card->info.mcl_level[0]) ? card->info.mcl_level : "", + (card->info.mcl_level[0]) ? 
")" : "", + qeth_get_cardname_short(card)); +} + +void qeth_print_status_message(struct qeth_card *card) +{ + switch (card->info.type) { + case QETH_CARD_TYPE_OSAE: + /* VM will use a non-zero first character + * to indicate a HiperSockets like reporting + * of the level OSA sets the first character to zero + * */ + if (!card->info.mcl_level[0]) { + sprintf(card->info.mcl_level, "%02x%02x", + card->info.mcl_level[2], + card->info.mcl_level[3]); + + card->info.mcl_level[QETH_MCL_LENGTH] = 0; + break; + } + /* fallthrough */ + case QETH_CARD_TYPE_IQD: + if (card->info.guestlan) { + card->info.mcl_level[0] = (char) _ebcasc[(__u8) + card->info.mcl_level[0]]; + card->info.mcl_level[1] = (char) _ebcasc[(__u8) + card->info.mcl_level[1]]; + card->info.mcl_level[2] = (char) _ebcasc[(__u8) + card->info.mcl_level[2]]; + card->info.mcl_level[3] = (char) _ebcasc[(__u8) + card->info.mcl_level[3]]; + card->info.mcl_level[QETH_MCL_LENGTH] = 0; + } + break; + default: + memset(&card->info.mcl_level[0], 0, QETH_MCL_LENGTH + 1); + } + if (card->info.portname_required) + qeth_print_status_with_portname(card); + else + qeth_print_status_no_portname(card); +} +EXPORT_SYMBOL_GPL(qeth_print_status_message); + +void qeth_put_buffer_pool_entry(struct qeth_card *card, + struct qeth_buffer_pool_entry *entry) +{ + QETH_DBF_TEXT(trace, 6, "ptbfplen"); + list_add_tail(&entry->list, &card->qdio.in_buf_pool.entry_list); +} +EXPORT_SYMBOL_GPL(qeth_put_buffer_pool_entry); + +static void qeth_initialize_working_pool_list(struct qeth_card *card) +{ + struct qeth_buffer_pool_entry *entry; + + QETH_DBF_TEXT(trace, 5, "inwrklst"); + + list_for_each_entry(entry, + &card->qdio.init_pool.entry_list, init_list) { + qeth_put_buffer_pool_entry(card, entry); + } +} + +static inline struct qeth_buffer_pool_entry *qeth_find_free_buffer_pool_entry( + struct qeth_card *card) +{ + struct list_head *plh; + struct qeth_buffer_pool_entry *entry; + int i, free; + struct page *page; + + if (list_empty(&card->qdio.in_buf_pool.entry_list)) + return NULL; + + list_for_each(plh, &card->qdio.in_buf_pool.entry_list) { + entry = list_entry(plh, struct qeth_buffer_pool_entry, list); + free = 1; + for (i = 0; i < QETH_MAX_BUFFER_ELEMENTS(card); ++i) { + if (page_count(virt_to_page(entry->elements[i])) > 1) { + free = 0; + break; + } + } + if (free) { + list_del_init(&entry->list); + return entry; + } + } + + /* no free buffer in pool so take first one and swap pages */ + entry = list_entry(card->qdio.in_buf_pool.entry_list.next, + struct qeth_buffer_pool_entry, list); + for (i = 0; i < QETH_MAX_BUFFER_ELEMENTS(card); ++i) { + if (page_count(virt_to_page(entry->elements[i])) > 1) { + page = alloc_page(GFP_ATOMIC|GFP_DMA); + if (!page) { + return NULL; + } else { + free_page((unsigned long)entry->elements[i]); + entry->elements[i] = page_address(page); + if (card->options.performance_stats) + card->perf_stats.sg_alloc_page_rx++; + } + } + } + list_del_init(&entry->list); + return entry; +} + +static int qeth_init_input_buffer(struct qeth_card *card, + struct qeth_qdio_buffer *buf) +{ + struct qeth_buffer_pool_entry *pool_entry; + int i; + + pool_entry = qeth_find_free_buffer_pool_entry(card); + if (!pool_entry) + return 1; + + /* + * since the buffer is accessed only from the input_tasklet + * there shouldn't be a need to synchronize; also, since we use + * the QETH_IN_BUF_REQUEUE_THRESHOLD we should never run out off + * buffers + */ + BUG_ON(!pool_entry); + + buf->pool_entry = pool_entry; + for (i = 0; i < QETH_MAX_BUFFER_ELEMENTS(card); ++i) { + 
buf->buffer->element[i].length = PAGE_SIZE; + buf->buffer->element[i].addr = pool_entry->elements[i]; + if (i == QETH_MAX_BUFFER_ELEMENTS(card) - 1) + buf->buffer->element[i].flags = SBAL_FLAGS_LAST_ENTRY; + else + buf->buffer->element[i].flags = 0; + } + return 0; +} + +int qeth_init_qdio_queues(struct qeth_card *card) +{ + int i, j; + int rc; + + QETH_DBF_TEXT(setup, 2, "initqdqs"); + + /* inbound queue */ + memset(card->qdio.in_q->qdio_bufs, 0, + QDIO_MAX_BUFFERS_PER_Q * sizeof(struct qdio_buffer)); + qeth_initialize_working_pool_list(card); + /*give only as many buffers to hardware as we have buffer pool entries*/ + for (i = 0; i < card->qdio.in_buf_pool.buf_count - 1; ++i) + qeth_init_input_buffer(card, &card->qdio.in_q->bufs[i]); + card->qdio.in_q->next_buf_to_init = + card->qdio.in_buf_pool.buf_count - 1; + rc = do_QDIO(CARD_DDEV(card), QDIO_FLAG_SYNC_INPUT, 0, 0, + card->qdio.in_buf_pool.buf_count - 1, NULL); + if (rc) { + QETH_DBF_TEXT_(setup, 2, "1err%d", rc); + return rc; + } + rc = qdio_synchronize(CARD_DDEV(card), QDIO_FLAG_SYNC_INPUT, 0); + if (rc) { + QETH_DBF_TEXT_(setup, 2, "2err%d", rc); + return rc; + } + /* outbound queue */ + for (i = 0; i < card->qdio.no_out_queues; ++i) { + memset(card->qdio.out_qs[i]->qdio_bufs, 0, + QDIO_MAX_BUFFERS_PER_Q * sizeof(struct qdio_buffer)); + for (j = 0; j < QDIO_MAX_BUFFERS_PER_Q; ++j) { + qeth_clear_output_buffer(card->qdio.out_qs[i], + &card->qdio.out_qs[i]->bufs[j]); + } + card->qdio.out_qs[i]->card = card; + card->qdio.out_qs[i]->next_buf_to_fill = 0; + card->qdio.out_qs[i]->do_pack = 0; + atomic_set(&card->qdio.out_qs[i]->used_buffers, 0); + atomic_set(&card->qdio.out_qs[i]->set_pci_flags_count, 0); + atomic_set(&card->qdio.out_qs[i]->state, + QETH_OUT_Q_UNLOCKED); + } + return 0; +} +EXPORT_SYMBOL_GPL(qeth_init_qdio_queues); + +static inline __u8 qeth_get_ipa_adp_type(enum qeth_link_types link_type) +{ + switch (link_type) { + case QETH_LINK_TYPE_HSTR: + return 2; + default: + return 1; + } +} + +static void qeth_fill_ipacmd_header(struct qeth_card *card, + struct qeth_ipa_cmd *cmd, __u8 command, + enum qeth_prot_versions prot) +{ + memset(cmd, 0, sizeof(struct qeth_ipa_cmd)); + cmd->hdr.command = command; + cmd->hdr.initiator = IPA_CMD_INITIATOR_HOST; + cmd->hdr.seqno = card->seqno.ipa; + cmd->hdr.adapter_type = qeth_get_ipa_adp_type(card->info.link_type); + cmd->hdr.rel_adapter_no = (__u8) card->info.portno; + if (card->options.layer2) + cmd->hdr.prim_version_no = 2; + else + cmd->hdr.prim_version_no = 1; + cmd->hdr.param_count = 1; + cmd->hdr.prot_version = prot; + cmd->hdr.ipa_supported = 0; + cmd->hdr.ipa_enabled = 0; +} + +struct qeth_cmd_buffer *qeth_get_ipacmd_buffer(struct qeth_card *card, + enum qeth_ipa_cmds ipacmd, enum qeth_prot_versions prot) +{ + struct qeth_cmd_buffer *iob; + struct qeth_ipa_cmd *cmd; + + iob = qeth_wait_for_buffer(&card->write); + cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); + qeth_fill_ipacmd_header(card, cmd, ipacmd, prot); + + return iob; +} +EXPORT_SYMBOL_GPL(qeth_get_ipacmd_buffer); + +void qeth_prepare_ipa_cmd(struct qeth_card *card, struct qeth_cmd_buffer *iob, + char prot_type) +{ + memcpy(iob->data, IPA_PDU_HEADER, IPA_PDU_HEADER_SIZE); + memcpy(QETH_IPA_CMD_PROT_TYPE(iob->data), &prot_type, 1); + memcpy(QETH_IPA_CMD_DEST_ADDR(iob->data), + &card->token.ulp_connection_r, QETH_MPC_TOKEN_LENGTH); +} +EXPORT_SYMBOL_GPL(qeth_prepare_ipa_cmd); + +int qeth_send_ipa_cmd(struct qeth_card *card, struct qeth_cmd_buffer *iob, + int (*reply_cb)(struct qeth_card *, struct 
qeth_reply*, + unsigned long), + void *reply_param) +{ + int rc; + char prot_type; + int cmd; + cmd = ((struct qeth_ipa_cmd *) + (iob->data+IPA_PDU_HEADER_SIZE))->hdr.command; + + QETH_DBF_TEXT(trace, 4, "sendipa"); + + if (card->options.layer2) + if (card->info.type == QETH_CARD_TYPE_OSN) + prot_type = QETH_PROT_OSN2; + else + prot_type = QETH_PROT_LAYER2; + else + prot_type = QETH_PROT_TCPIP; + qeth_prepare_ipa_cmd(card, iob, prot_type); + rc = qeth_send_control_data(card, IPA_CMD_LENGTH, iob, + reply_cb, reply_param); + if (rc != 0) { + char *ipa_cmd_name; + ipa_cmd_name = qeth_get_ipa_cmd_name(cmd); + PRINT_ERR("%s %s(%x) returned %s(%x)\n", __FUNCTION__, + ipa_cmd_name, cmd, qeth_get_ipa_msg(rc), rc); + } + return rc; +} +EXPORT_SYMBOL_GPL(qeth_send_ipa_cmd); + +static int qeth_send_startstoplan(struct qeth_card *card, + enum qeth_ipa_cmds ipacmd, enum qeth_prot_versions prot) +{ + int rc; + struct qeth_cmd_buffer *iob; + + iob = qeth_get_ipacmd_buffer(card, ipacmd, prot); + rc = qeth_send_ipa_cmd(card, iob, NULL, NULL); + + return rc; +} + +int qeth_send_startlan(struct qeth_card *card) +{ + int rc; + + QETH_DBF_TEXT(setup, 2, "strtlan"); + + rc = qeth_send_startstoplan(card, IPA_CMD_STARTLAN, 0); + return rc; +} +EXPORT_SYMBOL_GPL(qeth_send_startlan); + +int qeth_send_stoplan(struct qeth_card *card) +{ + int rc = 0; + + /* + * TODO: according to the IPA format document page 14, + * TCP/IP (we!) never issue a STOPLAN + * is this right ?!? + */ + QETH_DBF_TEXT(setup, 2, "stoplan"); + + rc = qeth_send_startstoplan(card, IPA_CMD_STOPLAN, 0); + return rc; +} +EXPORT_SYMBOL_GPL(qeth_send_stoplan); + +int qeth_default_setadapterparms_cb(struct qeth_card *card, + struct qeth_reply *reply, unsigned long data) +{ + struct qeth_ipa_cmd *cmd; + + QETH_DBF_TEXT(trace, 4, "defadpcb"); + + cmd = (struct qeth_ipa_cmd *) data; + if (cmd->hdr.return_code == 0) + cmd->hdr.return_code = + cmd->data.setadapterparms.hdr.return_code; + return 0; +} +EXPORT_SYMBOL_GPL(qeth_default_setadapterparms_cb); + +static int qeth_query_setadapterparms_cb(struct qeth_card *card, + struct qeth_reply *reply, unsigned long data) +{ + struct qeth_ipa_cmd *cmd; + + QETH_DBF_TEXT(trace, 3, "quyadpcb"); + + cmd = (struct qeth_ipa_cmd *) data; + if (cmd->data.setadapterparms.data.query_cmds_supp.lan_type & 0x7f) + card->info.link_type = + cmd->data.setadapterparms.data.query_cmds_supp.lan_type; + card->options.adp.supported_funcs = + cmd->data.setadapterparms.data.query_cmds_supp.supported_cmds; + return qeth_default_setadapterparms_cb(card, reply, (unsigned long)cmd); +} + +struct qeth_cmd_buffer *qeth_get_adapter_cmd(struct qeth_card *card, + __u32 command, __u32 cmdlen) +{ + struct qeth_cmd_buffer *iob; + struct qeth_ipa_cmd *cmd; + + iob = qeth_get_ipacmd_buffer(card, IPA_CMD_SETADAPTERPARMS, + QETH_PROT_IPV4); + cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); + cmd->data.setadapterparms.hdr.cmdlength = cmdlen; + cmd->data.setadapterparms.hdr.command_code = command; + cmd->data.setadapterparms.hdr.used_total = 1; + cmd->data.setadapterparms.hdr.seq_no = 1; + + return iob; +} +EXPORT_SYMBOL_GPL(qeth_get_adapter_cmd); + +int qeth_query_setadapterparms(struct qeth_card *card) +{ + int rc; + struct qeth_cmd_buffer *iob; + + QETH_DBF_TEXT(trace, 3, "queryadp"); + iob = qeth_get_adapter_cmd(card, IPA_SETADP_QUERY_COMMANDS_SUPPORTED, + sizeof(struct qeth_ipacmd_setadpparms)); + rc = qeth_send_ipa_cmd(card, iob, qeth_query_setadapterparms_cb, NULL); + return rc; +} +EXPORT_SYMBOL_GPL(qeth_query_setadapterparms); + 
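+/* Records the QDIO/SIGA error words and SBAL flags 14/15 in the qerr debug area; returns 1 if either error was flagged, 0 otherwise. */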
+int qeth_check_qdio_errors(struct qdio_buffer *buf, unsigned int qdio_error, + unsigned int siga_error, const char *dbftext) +{ + if (qdio_error || siga_error) { + QETH_DBF_TEXT(trace, 2, dbftext); + QETH_DBF_TEXT(qerr, 2, dbftext); + QETH_DBF_TEXT_(qerr, 2, " F15=%02X", + buf->element[15].flags & 0xff); + QETH_DBF_TEXT_(qerr, 2, " F14=%02X", + buf->element[14].flags & 0xff); + QETH_DBF_TEXT_(qerr, 2, " qerr=%X", qdio_error); + QETH_DBF_TEXT_(qerr, 2, " serr=%X", siga_error); + return 1; + } + return 0; +} +EXPORT_SYMBOL_GPL(qeth_check_qdio_errors); + +void qeth_queue_input_buffer(struct qeth_card *card, int index) +{ + struct qeth_qdio_q *queue = card->qdio.in_q; + int count; + int i; + int rc; + int newcount = 0; + + QETH_DBF_TEXT(trace, 6, "queinbuf"); + count = (index < queue->next_buf_to_init)? + card->qdio.in_buf_pool.buf_count - + (queue->next_buf_to_init - index) : + card->qdio.in_buf_pool.buf_count - + (queue->next_buf_to_init + QDIO_MAX_BUFFERS_PER_Q - index); + /* only requeue at a certain threshold to avoid SIGAs */ + if (count >= QETH_IN_BUF_REQUEUE_THRESHOLD(card)) { + for (i = queue->next_buf_to_init; + i < queue->next_buf_to_init + count; ++i) { + if (qeth_init_input_buffer(card, + &queue->bufs[i % QDIO_MAX_BUFFERS_PER_Q])) { + break; + } else { + newcount++; + } + } + + if (newcount < count) { + /* we are running out of memory, so we switch back to + traditional skb allocation and drop packets */ + if (!atomic_read(&card->force_alloc_skb) && + net_ratelimit()) + PRINT_WARN("Switch to alloc skb\n"); + atomic_set(&card->force_alloc_skb, 3); + count = newcount; + } else { + if ((atomic_read(&card->force_alloc_skb) == 1) && + net_ratelimit()) + PRINT_WARN("Switch to sg\n"); + atomic_add_unless(&card->force_alloc_skb, -1, 0); + } + + /* + * according to the old code, requeueing all 128 buffers + * should be avoided in order to benefit from PCI avoidance. 
+ * this function keeps at least one buffer (the buffer at + * 'index') un-requeued -> this buffer is the first buffer that + * will be requeued the next time + */ + if (card->options.performance_stats) { + card->perf_stats.inbound_do_qdio_cnt++; + card->perf_stats.inbound_do_qdio_start_time = + qeth_get_micros(); + } + rc = do_QDIO(CARD_DDEV(card), + QDIO_FLAG_SYNC_INPUT | QDIO_FLAG_UNDER_INTERRUPT, + 0, queue->next_buf_to_init, count, NULL); + if (card->options.performance_stats) + card->perf_stats.inbound_do_qdio_time += + qeth_get_micros() - + card->perf_stats.inbound_do_qdio_start_time; + if (rc) { + PRINT_WARN("qeth_queue_input_buffer's do_QDIO " + "return %i (device %s).\n", + rc, CARD_DDEV_ID(card)); + QETH_DBF_TEXT(trace, 2, "qinberr"); + QETH_DBF_TEXT_(trace, 2, "%s", CARD_BUS_ID(card)); + } + queue->next_buf_to_init = (queue->next_buf_to_init + count) % + QDIO_MAX_BUFFERS_PER_Q; + } +} +EXPORT_SYMBOL_GPL(qeth_queue_input_buffer); + +static int qeth_handle_send_error(struct qeth_card *card, + struct qeth_qdio_out_buffer *buffer, unsigned int qdio_err, + unsigned int siga_err) +{ + int sbalf15 = buffer->buffer->element[15].flags & 0xff; + int cc = siga_err & 3; + + QETH_DBF_TEXT(trace, 6, "hdsnderr"); + qeth_check_qdio_errors(buffer->buffer, qdio_err, siga_err, "qouterr"); + switch (cc) { + case 0: + if (qdio_err) { + QETH_DBF_TEXT(trace, 1, "lnkfail"); + QETH_DBF_TEXT_(trace, 1, "%s", CARD_BUS_ID(card)); + QETH_DBF_TEXT_(trace, 1, "%04x %02x", + (u16)qdio_err, (u8)sbalf15); + return QETH_SEND_ERROR_LINK_FAILURE; + } + return QETH_SEND_ERROR_NONE; + case 2: + if (siga_err & QDIO_SIGA_ERROR_B_BIT_SET) { + QETH_DBF_TEXT(trace, 1, "SIGAcc2B"); + QETH_DBF_TEXT_(trace, 1, "%s", CARD_BUS_ID(card)); + return QETH_SEND_ERROR_KICK_IT; + } + if ((sbalf15 >= 15) && (sbalf15 <= 31)) + return QETH_SEND_ERROR_RETRY; + return QETH_SEND_ERROR_LINK_FAILURE; + /* look at qdio_error and sbalf 15 */ + case 1: + QETH_DBF_TEXT(trace, 1, "SIGAcc1"); + QETH_DBF_TEXT_(trace, 1, "%s", CARD_BUS_ID(card)); + return QETH_SEND_ERROR_LINK_FAILURE; + case 3: + default: + QETH_DBF_TEXT(trace, 1, "SIGAcc3"); + QETH_DBF_TEXT_(trace, 1, "%s", CARD_BUS_ID(card)); + return QETH_SEND_ERROR_KICK_IT; + } +} + +/* + * Switches to packing state if the number of used buffers on a queue + * reaches a certain limit. + */ +static void qeth_switch_to_packing_if_needed(struct qeth_qdio_out_q *queue) +{ + if (!queue->do_pack) { + if (atomic_read(&queue->used_buffers) + >= QETH_HIGH_WATERMARK_PACK){ + /* switch non-PACKING -> PACKING */ + QETH_DBF_TEXT(trace, 6, "np->pack"); + if (queue->card->options.performance_stats) + queue->card->perf_stats.sc_dp_p++; + queue->do_pack = 1; + } + } +} + +/* + * Switches from packing to non-packing mode. If there is a packing + * buffer on the queue this buffer will be prepared to be flushed. + * In that case 1 is returned to inform the caller. If no buffer + * has to be flushed, zero is returned. 
+ */ +static int qeth_switch_to_nonpacking_if_needed(struct qeth_qdio_out_q *queue) +{ + struct qeth_qdio_out_buffer *buffer; + int flush_count = 0; + + if (queue->do_pack) { + if (atomic_read(&queue->used_buffers) + <= QETH_LOW_WATERMARK_PACK) { + /* switch PACKING -> non-PACKING */ + QETH_DBF_TEXT(trace, 6, "pack->np"); + if (queue->card->options.performance_stats) + queue->card->perf_stats.sc_p_dp++; + queue->do_pack = 0; + /* flush packing buffers */ + buffer = &queue->bufs[queue->next_buf_to_fill]; + if ((atomic_read(&buffer->state) == + QETH_QDIO_BUF_EMPTY) && + (buffer->next_element_to_fill > 0)) { + atomic_set(&buffer->state, + QETH_QDIO_BUF_PRIMED); + flush_count++; + queue->next_buf_to_fill = + (queue->next_buf_to_fill + 1) % + QDIO_MAX_BUFFERS_PER_Q; + } + } + } + return flush_count; +} + +/* + * Called to flush a packing buffer if no more pci flags are on the queue. + * Checks if there is a packing buffer and prepares it to be flushed. + * In that case returns 1, otherwise zero. + */ +static int qeth_flush_buffers_on_no_pci(struct qeth_qdio_out_q *queue) +{ + struct qeth_qdio_out_buffer *buffer; + + buffer = &queue->bufs[queue->next_buf_to_fill]; + if ((atomic_read(&buffer->state) == QETH_QDIO_BUF_EMPTY) && + (buffer->next_element_to_fill > 0)) { + /* it's a packing buffer */ + atomic_set(&buffer->state, QETH_QDIO_BUF_PRIMED); + queue->next_buf_to_fill = + (queue->next_buf_to_fill + 1) % QDIO_MAX_BUFFERS_PER_Q; + return 1; + } + return 0; +} + +static void qeth_flush_buffers(struct qeth_qdio_out_q *queue, int under_int, + int index, int count) +{ + struct qeth_qdio_out_buffer *buf; + int rc; + int i; + unsigned int qdio_flags; + + QETH_DBF_TEXT(trace, 6, "flushbuf"); + + for (i = index; i < index + count; ++i) { + buf = &queue->bufs[i % QDIO_MAX_BUFFERS_PER_Q]; + buf->buffer->element[buf->next_element_to_fill - 1].flags |= + SBAL_FLAGS_LAST_ENTRY; + + if (queue->card->info.type == QETH_CARD_TYPE_IQD) + continue; + + if (!queue->do_pack) { + if ((atomic_read(&queue->used_buffers) >= + (QETH_HIGH_WATERMARK_PACK - + QETH_WATERMARK_PACK_FUZZ)) && + !atomic_read(&queue->set_pci_flags_count)) { + /* it's likely that we'll go to packing + * mode soon */ + atomic_inc(&queue->set_pci_flags_count); + buf->buffer->element[0].flags |= 0x40; + } + } else { + if (!atomic_read(&queue->set_pci_flags_count)) { + /* + * there's no outstanding PCI any more, so we + * have to request a PCI to be sure that the PCI + * will wake at some time in the future; then we + * can flush packed buffers that might still be + * hanging around, which can happen if no + * further send was requested by the stack + */ + atomic_inc(&queue->set_pci_flags_count); + buf->buffer->element[0].flags |= 0x40; + } + } + } + + queue->card->dev->trans_start = jiffies; + if (queue->card->options.performance_stats) { + queue->card->perf_stats.outbound_do_qdio_cnt++; + queue->card->perf_stats.outbound_do_qdio_start_time = + qeth_get_micros(); + } + qdio_flags = QDIO_FLAG_SYNC_OUTPUT; + if (under_int) + qdio_flags |= QDIO_FLAG_UNDER_INTERRUPT; + if (atomic_read(&queue->set_pci_flags_count)) + qdio_flags |= QDIO_FLAG_PCI_OUT; + rc = do_QDIO(CARD_DDEV(queue->card), qdio_flags, + queue->queue_no, index, count, NULL); + if (queue->card->options.performance_stats) + queue->card->perf_stats.outbound_do_qdio_time += + qeth_get_micros() - + queue->card->perf_stats.outbound_do_qdio_start_time; + if (rc) { + QETH_DBF_TEXT(trace, 2, "flushbuf"); + QETH_DBF_TEXT_(trace, 2, " err%d", rc); + QETH_DBF_TEXT_(trace, 2, "%s", 
CARD_DDEV_ID(queue->card)); + queue->card->stats.tx_errors += count; + /* this must not happen under normal circumstances. if it + * happens something is really wrong -> recover */ + qeth_schedule_recovery(queue->card); + return; + } + atomic_add(count, &queue->used_buffers); + if (queue->card->options.performance_stats) + queue->card->perf_stats.bufs_sent += count; +} + +static void qeth_check_outbound_queue(struct qeth_qdio_out_q *queue) +{ + int index; + int flush_cnt = 0; + int q_was_packing = 0; + + /* + * check if we have to switch to non-packing mode or if + * we have to get a pci flag out on the queue + */ + if ((atomic_read(&queue->used_buffers) <= QETH_LOW_WATERMARK_PACK) || + !atomic_read(&queue->set_pci_flags_count)) { + if (atomic_xchg(&queue->state, QETH_OUT_Q_LOCKED_FLUSH) == + QETH_OUT_Q_UNLOCKED) { + /* + * If we get in here, there was no action in + * do_send_packet. So, we check if there is a + * packing buffer to be flushed here. + */ + netif_stop_queue(queue->card->dev); + index = queue->next_buf_to_fill; + q_was_packing = queue->do_pack; + /* queue->do_pack may change */ + barrier(); + flush_cnt += qeth_switch_to_nonpacking_if_needed(queue); + if (!flush_cnt && + !atomic_read(&queue->set_pci_flags_count)) + flush_cnt += + qeth_flush_buffers_on_no_pci(queue); + if (queue->card->options.performance_stats && + q_was_packing) + queue->card->perf_stats.bufs_sent_pack += + flush_cnt; + if (flush_cnt) + qeth_flush_buffers(queue, 1, index, flush_cnt); + atomic_set(&queue->state, QETH_OUT_Q_UNLOCKED); + } + } +} + +void qeth_qdio_output_handler(struct ccw_device *ccwdev, unsigned int status, + unsigned int qdio_error, unsigned int siga_error, + unsigned int __queue, int first_element, int count, + unsigned long card_ptr) +{ + struct qeth_card *card = (struct qeth_card *) card_ptr; + struct qeth_qdio_out_q *queue = card->qdio.out_qs[__queue]; + struct qeth_qdio_out_buffer *buffer; + int i; + + QETH_DBF_TEXT(trace, 6, "qdouhdl"); + if (status & QDIO_STATUS_LOOK_FOR_ERROR) { + if (status & QDIO_STATUS_ACTIVATE_CHECK_CONDITION) { + QETH_DBF_TEXT(trace, 2, "achkcond"); + QETH_DBF_TEXT_(trace, 2, "%s", CARD_BUS_ID(card)); + QETH_DBF_TEXT_(trace, 2, "%08x", status); + netif_stop_queue(card->dev); + qeth_schedule_recovery(card); + return; + } + } + if (card->options.performance_stats) { + card->perf_stats.outbound_handler_cnt++; + card->perf_stats.outbound_handler_start_time = + qeth_get_micros(); + } + for (i = first_element; i < (first_element + count); ++i) { + buffer = &queue->bufs[i % QDIO_MAX_BUFFERS_PER_Q]; + /*we only handle the KICK_IT error by doing a recovery */ + if (qeth_handle_send_error(card, buffer, + qdio_error, siga_error) + == QETH_SEND_ERROR_KICK_IT){ + netif_stop_queue(card->dev); + qeth_schedule_recovery(card); + return; + } + qeth_clear_output_buffer(queue, buffer); + } + atomic_sub(count, &queue->used_buffers); + /* check if we need to do something on this outbound queue */ + if (card->info.type != QETH_CARD_TYPE_IQD) + qeth_check_outbound_queue(queue); + + netif_wake_queue(queue->card->dev); + if (card->options.performance_stats) + card->perf_stats.outbound_handler_time += qeth_get_micros() - + card->perf_stats.outbound_handler_start_time; +} +EXPORT_SYMBOL_GPL(qeth_qdio_output_handler); + +int qeth_get_cast_type(struct qeth_card *card, struct sk_buff *skb) +{ + int cast_type = RTN_UNSPEC; + + if (card->info.type == QETH_CARD_TYPE_OSN) + return cast_type; + + if (skb->dst && skb->dst->neighbour) { + cast_type = skb->dst->neighbour->type; + if ((cast_type == 
RTN_BROADCAST) || + (cast_type == RTN_MULTICAST) || + (cast_type == RTN_ANYCAST)) + return cast_type; + else + return RTN_UNSPEC; + } + /* try something else */ + if (skb->protocol == ETH_P_IPV6) + return (skb_network_header(skb)[24] == 0xff) ? + RTN_MULTICAST : 0; + else if (skb->protocol == ETH_P_IP) + return ((skb_network_header(skb)[16] & 0xf0) == 0xe0) ? + RTN_MULTICAST : 0; + /* ... */ + if (!memcmp(skb->data, skb->dev->broadcast, 6)) + return RTN_BROADCAST; + else { + u16 hdr_mac; + + hdr_mac = *((u16 *)skb->data); + /* tr multicast? */ + switch (card->info.link_type) { + case QETH_LINK_TYPE_HSTR: + case QETH_LINK_TYPE_LANE_TR: + if ((hdr_mac == QETH_TR_MAC_NC) || + (hdr_mac == QETH_TR_MAC_C)) + return RTN_MULTICAST; + break; + /* eth or so multicast? */ + default: + if ((hdr_mac == QETH_ETH_MAC_V4) || + (hdr_mac == QETH_ETH_MAC_V6)) + return RTN_MULTICAST; + } + } + return cast_type; +} +EXPORT_SYMBOL_GPL(qeth_get_cast_type); + +int qeth_get_priority_queue(struct qeth_card *card, struct sk_buff *skb, + int ipv, int cast_type) +{ + if (!ipv && (card->info.type == QETH_CARD_TYPE_OSAE)) + return card->qdio.default_out_queue; + switch (card->qdio.no_out_queues) { + case 4: + if (cast_type && card->info.is_multicast_different) + return card->info.is_multicast_different & + (card->qdio.no_out_queues - 1); + if (card->qdio.do_prio_queueing && (ipv == 4)) { + const u8 tos = ip_hdr(skb)->tos; + + if (card->qdio.do_prio_queueing == + QETH_PRIO_Q_ING_TOS) { + if (tos & IP_TOS_NOTIMPORTANT) + return 3; + if (tos & IP_TOS_HIGHRELIABILITY) + return 2; + if (tos & IP_TOS_HIGHTHROUGHPUT) + return 1; + if (tos & IP_TOS_LOWDELAY) + return 0; + } + if (card->qdio.do_prio_queueing == + QETH_PRIO_Q_ING_PREC) + return 3 - (tos >> 6); + } else if (card->qdio.do_prio_queueing && (ipv == 6)) { + /* TODO: IPv6!!! */ + } + return card->qdio.default_out_queue; + case 1: /* fallthrough for single-out-queue 1920-device */ + default: + return card->qdio.default_out_queue; + } +} +EXPORT_SYMBOL_GPL(qeth_get_priority_queue); + +static void __qeth_free_new_skb(struct sk_buff *orig_skb, + struct sk_buff *new_skb) +{ + if (orig_skb != new_skb) + dev_kfree_skb_any(new_skb); +} + +static inline struct sk_buff *qeth_realloc_headroom(struct qeth_card *card, + struct sk_buff *skb, int size) +{ + struct sk_buff *new_skb = skb; + + if (skb_headroom(skb) >= size) + return skb; + new_skb = skb_realloc_headroom(skb, size); + if (!new_skb) + PRINT_ERR("Could not realloc headroom for qeth_hdr " + "on interface %s", QETH_CARD_IFNAME(card)); + return new_skb; +} + +struct sk_buff *qeth_prepare_skb(struct qeth_card *card, struct sk_buff *skb, + struct qeth_hdr **hdr) +{ + struct sk_buff *new_skb; + + QETH_DBF_TEXT(trace, 6, "prepskb"); + + new_skb = qeth_realloc_headroom(card, skb, + sizeof(struct qeth_hdr)); + if (!new_skb) + return NULL; + + *hdr = ((struct qeth_hdr *)qeth_push_skb(card, new_skb, + sizeof(struct qeth_hdr))); + if (*hdr == NULL) { + __qeth_free_new_skb(skb, new_skb); + return NULL; + } + return new_skb; +} +EXPORT_SYMBOL_GPL(qeth_prepare_skb); + +int qeth_get_elements_no(struct qeth_card *card, void *hdr, + struct sk_buff *skb, int elems) +{ + int elements_needed = 0; + + if (skb_shinfo(skb)->nr_frags > 0) + elements_needed = (skb_shinfo(skb)->nr_frags + 1); + if (elements_needed == 0) + elements_needed = 1 + (((((unsigned long) hdr) % PAGE_SIZE) + + skb->len) >> PAGE_SHIFT); + if ((elements_needed + elems) > QETH_MAX_BUFFER_ELEMENTS(card)) { + PRINT_ERR("Invalid size of IP packet " + "(Number=%d / Length=%d). 
Discarded.\n", + (elements_needed+elems), skb->len); + return 0; + } + return elements_needed; +} +EXPORT_SYMBOL_GPL(qeth_get_elements_no); + +static void __qeth_fill_buffer(struct sk_buff *skb, struct qdio_buffer *buffer, + int is_tso, int *next_element_to_fill) +{ + int length = skb->len; + int length_here; + int element; + char *data; + int first_lap ; + + element = *next_element_to_fill; + data = skb->data; + first_lap = (is_tso == 0 ? 1 : 0); + + while (length > 0) { + /* length_here is the remaining amount of data in this page */ + length_here = PAGE_SIZE - ((unsigned long) data % PAGE_SIZE); + if (length < length_here) + length_here = length; + + buffer->element[element].addr = data; + buffer->element[element].length = length_here; + length -= length_here; + if (!length) { + if (first_lap) + buffer->element[element].flags = 0; + else + buffer->element[element].flags = + SBAL_FLAGS_LAST_FRAG; + } else { + if (first_lap) + buffer->element[element].flags = + SBAL_FLAGS_FIRST_FRAG; + else + buffer->element[element].flags = + SBAL_FLAGS_MIDDLE_FRAG; + } + data += length_here; + element++; + first_lap = 0; + } + *next_element_to_fill = element; +} + +static int qeth_fill_buffer(struct qeth_qdio_out_q *queue, + struct qeth_qdio_out_buffer *buf, struct sk_buff *skb) +{ + struct qdio_buffer *buffer; + struct qeth_hdr_tso *hdr; + int flush_cnt = 0, hdr_len, large_send = 0; + + QETH_DBF_TEXT(trace, 6, "qdfillbf"); + + buffer = buf->buffer; + atomic_inc(&skb->users); + skb_queue_tail(&buf->skb_list, skb); + + hdr = (struct qeth_hdr_tso *) skb->data; + /*check first on TSO ....*/ + if (hdr->hdr.hdr.l3.id == QETH_HEADER_TYPE_TSO) { + int element = buf->next_element_to_fill; + + hdr_len = sizeof(struct qeth_hdr_tso) + hdr->ext.dg_hdr_len; + /*fill first buffer entry only with header information */ + buffer->element[element].addr = skb->data; + buffer->element[element].length = hdr_len; + buffer->element[element].flags = SBAL_FLAGS_FIRST_FRAG; + buf->next_element_to_fill++; + skb->data += hdr_len; + skb->len -= hdr_len; + large_send = 1; + } + if (skb_shinfo(skb)->nr_frags == 0) + __qeth_fill_buffer(skb, buffer, large_send, + (int *)&buf->next_element_to_fill); + else + __qeth_fill_buffer_frag(skb, buffer, large_send, + (int *)&buf->next_element_to_fill); + + if (!queue->do_pack) { + QETH_DBF_TEXT(trace, 6, "fillbfnp"); + /* set state to PRIMED -> will be flushed */ + atomic_set(&buf->state, QETH_QDIO_BUF_PRIMED); + flush_cnt = 1; + } else { + QETH_DBF_TEXT(trace, 6, "fillbfpa"); + if (queue->card->options.performance_stats) + queue->card->perf_stats.skbs_sent_pack++; + if (buf->next_element_to_fill >= + QETH_MAX_BUFFER_ELEMENTS(queue->card)) { + /* + * packed buffer is full -> set state PRIMED + * -> will be flushed + */ + atomic_set(&buf->state, QETH_QDIO_BUF_PRIMED); + flush_cnt = 1; + } + } + return flush_cnt; +} + +int qeth_do_send_packet_fast(struct qeth_card *card, + struct qeth_qdio_out_q *queue, struct sk_buff *skb, + struct qeth_hdr *hdr, int elements_needed, + struct qeth_eddp_context *ctx) +{ + struct qeth_qdio_out_buffer *buffer; + int buffers_needed = 0; + int flush_cnt = 0; + int index; + + QETH_DBF_TEXT(trace, 6, "dosndpfa"); + + /* spin until we get the queue ... */ + while (atomic_cmpxchg(&queue->state, QETH_OUT_Q_UNLOCKED, + QETH_OUT_Q_LOCKED) != QETH_OUT_Q_UNLOCKED); + /* ... 
now we've got the queue */ + index = queue->next_buf_to_fill; + buffer = &queue->bufs[queue->next_buf_to_fill]; + /* + * check if buffer is empty to make sure that we do not 'overtake' + * ourselves and try to fill a buffer that is already primed + */ + if (atomic_read(&buffer->state) != QETH_QDIO_BUF_EMPTY) + goto out; + if (ctx == NULL) + queue->next_buf_to_fill = (queue->next_buf_to_fill + 1) % + QDIO_MAX_BUFFERS_PER_Q; + else { + buffers_needed = qeth_eddp_check_buffers_for_context(queue, + ctx); + if (buffers_needed < 0) + goto out; + queue->next_buf_to_fill = + (queue->next_buf_to_fill + buffers_needed) % + QDIO_MAX_BUFFERS_PER_Q; + } + atomic_set(&queue->state, QETH_OUT_Q_UNLOCKED); + if (ctx == NULL) { + qeth_fill_buffer(queue, buffer, skb); + qeth_flush_buffers(queue, 0, index, 1); + } else { + flush_cnt = qeth_eddp_fill_buffer(queue, ctx, index); + WARN_ON(buffers_needed != flush_cnt); + qeth_flush_buffers(queue, 0, index, flush_cnt); + } + return 0; +out: + atomic_set(&queue->state, QETH_OUT_Q_UNLOCKED); + return -EBUSY; +} +EXPORT_SYMBOL_GPL(qeth_do_send_packet_fast); + +int qeth_do_send_packet(struct qeth_card *card, struct qeth_qdio_out_q *queue, + struct sk_buff *skb, struct qeth_hdr *hdr, + int elements_needed, struct qeth_eddp_context *ctx) +{ + struct qeth_qdio_out_buffer *buffer; + int start_index; + int flush_count = 0; + int do_pack = 0; + int tmp; + int rc = 0; + + QETH_DBF_TEXT(trace, 6, "dosndpkt"); + + /* spin until we get the queue ... */ + while (atomic_cmpxchg(&queue->state, QETH_OUT_Q_UNLOCKED, + QETH_OUT_Q_LOCKED) != QETH_OUT_Q_UNLOCKED); + start_index = queue->next_buf_to_fill; + buffer = &queue->bufs[queue->next_buf_to_fill]; + /* + * check if buffer is empty to make sure that we do not 'overtake' + * ourselves and try to fill a buffer that is already primed + */ + if (atomic_read(&buffer->state) != QETH_QDIO_BUF_EMPTY) { + atomic_set(&queue->state, QETH_OUT_Q_UNLOCKED); + return -EBUSY; + } + /* check if we need to switch packing state of this queue */ + qeth_switch_to_packing_if_needed(queue); + if (queue->do_pack) { + do_pack = 1; + if (ctx == NULL) { + /* does packet fit in current buffer? */ + if ((QETH_MAX_BUFFER_ELEMENTS(card) - + buffer->next_element_to_fill) < elements_needed) { + /* ... 
no -> set state PRIMED */ + atomic_set(&buffer->state, + QETH_QDIO_BUF_PRIMED); + flush_count++; + queue->next_buf_to_fill = + (queue->next_buf_to_fill + 1) % + QDIO_MAX_BUFFERS_PER_Q; + buffer = &queue->bufs[queue->next_buf_to_fill]; + /* we did a step forward, so check buffer state + * again */ + if (atomic_read(&buffer->state) != + QETH_QDIO_BUF_EMPTY){ + qeth_flush_buffers(queue, 0, + start_index, flush_count); + atomic_set(&queue->state, + QETH_OUT_Q_UNLOCKED); + return -EBUSY; + } + } + } else { + /* check if we have enough elements (including following + * free buffers) to handle eddp context */ + if (qeth_eddp_check_buffers_for_context(queue, ctx) + < 0) { + if (net_ratelimit()) + PRINT_WARN("eddp tx_dropped 1\n"); + rc = -EBUSY; + goto out; + } + } + } + if (ctx == NULL) + tmp = qeth_fill_buffer(queue, buffer, skb); + else { + tmp = qeth_eddp_fill_buffer(queue, ctx, + queue->next_buf_to_fill); + if (tmp < 0) { + PRINT_ERR("eddp tx_dropped 2\n"); + rc = -EBUSY; + goto out; + } + } + queue->next_buf_to_fill = (queue->next_buf_to_fill + tmp) % + QDIO_MAX_BUFFERS_PER_Q; + flush_count += tmp; +out: + if (flush_count) + qeth_flush_buffers(queue, 0, start_index, flush_count); + else if (!atomic_read(&queue->set_pci_flags_count)) + atomic_xchg(&queue->state, QETH_OUT_Q_LOCKED_FLUSH); + /* + * queue->state will go from LOCKED -> UNLOCKED or from + * LOCKED_FLUSH -> LOCKED if output_handler wanted to 'notify' us + * (switch packing state or flush buffer to get another pci flag out). + * In that case we will enter this loop + */ + while (atomic_dec_return(&queue->state)) { + flush_count = 0; + start_index = queue->next_buf_to_fill; + /* check if we can go back to non-packing state */ + flush_count += qeth_switch_to_nonpacking_if_needed(queue); + /* + * check if we need to flush a packing buffer to get a pci + * flag out on the queue + */ + if (!flush_count && !atomic_read(&queue->set_pci_flags_count)) + flush_count += qeth_flush_buffers_on_no_pci(queue); + if (flush_count) + qeth_flush_buffers(queue, 0, start_index, flush_count); + } + /* at this point the queue is UNLOCKED again */ + if (queue->card->options.performance_stats && do_pack) + queue->card->perf_stats.bufs_sent_pack += flush_count; + + return rc; +} +EXPORT_SYMBOL_GPL(qeth_do_send_packet); + +static int qeth_setadp_promisc_mode_cb(struct qeth_card *card, + struct qeth_reply *reply, unsigned long data) +{ + struct qeth_ipa_cmd *cmd; + struct qeth_ipacmd_setadpparms *setparms; + + QETH_DBF_TEXT(trace, 4, "prmadpcb"); + + cmd = (struct qeth_ipa_cmd *) data; + setparms = &(cmd->data.setadapterparms); + + qeth_default_setadapterparms_cb(card, reply, (unsigned long)cmd); + if (cmd->hdr.return_code) { + QETH_DBF_TEXT_(trace, 4, "prmrc%2.2x", cmd->hdr.return_code); + setparms->data.mode = SET_PROMISC_MODE_OFF; + } + card->info.promisc_mode = setparms->data.mode; + return 0; +} + +void qeth_setadp_promisc_mode(struct qeth_card *card) +{ + enum qeth_ipa_promisc_modes mode; + struct net_device *dev = card->dev; + struct qeth_cmd_buffer *iob; + struct qeth_ipa_cmd *cmd; + + QETH_DBF_TEXT(trace, 4, "setprom"); + + if (((dev->flags & IFF_PROMISC) && + (card->info.promisc_mode == SET_PROMISC_MODE_ON)) || + (!(dev->flags & IFF_PROMISC) && + (card->info.promisc_mode == SET_PROMISC_MODE_OFF))) + return; + mode = SET_PROMISC_MODE_OFF; + if (dev->flags & IFF_PROMISC) + mode = SET_PROMISC_MODE_ON; + QETH_DBF_TEXT_(trace, 4, "mode:%x", mode); + + iob = qeth_get_adapter_cmd(card, IPA_SETADP_SET_PROMISC_MODE, + sizeof(struct qeth_ipacmd_setadpparms)); 
+ cmd = (struct qeth_ipa_cmd *)(iob->data + IPA_PDU_HEADER_SIZE); + cmd->data.setadapterparms.data.mode = mode; + qeth_send_ipa_cmd(card, iob, qeth_setadp_promisc_mode_cb, NULL); +} +EXPORT_SYMBOL_GPL(qeth_setadp_promisc_mode); + +int qeth_change_mtu(struct net_device *dev, int new_mtu) +{ + struct qeth_card *card; + char dbf_text[15]; + + card = netdev_priv(dev); + + QETH_DBF_TEXT(trace, 4, "chgmtu"); + sprintf(dbf_text, "%8x", new_mtu); + QETH_DBF_TEXT(trace, 4, dbf_text); + + if (new_mtu < 64) + return -EINVAL; + if (new_mtu > 65535) + return -EINVAL; + if ((!qeth_is_supported(card, IPA_IP_FRAGMENTATION)) && + (!qeth_mtu_is_valid(card, new_mtu))) + return -EINVAL; + dev->mtu = new_mtu; + return 0; +} +EXPORT_SYMBOL_GPL(qeth_change_mtu); + +struct net_device_stats *qeth_get_stats(struct net_device *dev) +{ + struct qeth_card *card; + + card = netdev_priv(dev); + + QETH_DBF_TEXT(trace, 5, "getstat"); + + return &card->stats; +} +EXPORT_SYMBOL_GPL(qeth_get_stats); + +static int qeth_setadpparms_change_macaddr_cb(struct qeth_card *card, + struct qeth_reply *reply, unsigned long data) +{ + struct qeth_ipa_cmd *cmd; + + QETH_DBF_TEXT(trace, 4, "chgmaccb"); + + cmd = (struct qeth_ipa_cmd *) data; + if (!card->options.layer2 || + !(card->info.mac_bits & QETH_LAYER2_MAC_READ)) { + memcpy(card->dev->dev_addr, + &cmd->data.setadapterparms.data.change_addr.addr, + OSA_ADDR_LEN); + card->info.mac_bits |= QETH_LAYER2_MAC_READ; + } + qeth_default_setadapterparms_cb(card, reply, (unsigned long) cmd); + return 0; +} + +int qeth_setadpparms_change_macaddr(struct qeth_card *card) +{ + int rc; + struct qeth_cmd_buffer *iob; + struct qeth_ipa_cmd *cmd; + + QETH_DBF_TEXT(trace, 4, "chgmac"); + + iob = qeth_get_adapter_cmd(card, IPA_SETADP_ALTER_MAC_ADDRESS, + sizeof(struct qeth_ipacmd_setadpparms)); + cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); + cmd->data.setadapterparms.data.change_addr.cmd = CHANGE_ADDR_READ_MAC; + cmd->data.setadapterparms.data.change_addr.addr_size = OSA_ADDR_LEN; + memcpy(&cmd->data.setadapterparms.data.change_addr.addr, + card->dev->dev_addr, OSA_ADDR_LEN); + rc = qeth_send_ipa_cmd(card, iob, qeth_setadpparms_change_macaddr_cb, + NULL); + return rc; +} +EXPORT_SYMBOL_GPL(qeth_setadpparms_change_macaddr); + +void qeth_tx_timeout(struct net_device *dev) +{ + struct qeth_card *card; + + card = netdev_priv(dev); + card->stats.tx_errors++; + qeth_schedule_recovery(card); +} +EXPORT_SYMBOL_GPL(qeth_tx_timeout); + +int qeth_mdio_read(struct net_device *dev, int phy_id, int regnum) +{ + struct qeth_card *card = netdev_priv(dev); + int rc = 0; + + switch (regnum) { + case MII_BMCR: /* Basic mode control register */ + rc = BMCR_FULLDPLX; + if ((card->info.link_type != QETH_LINK_TYPE_GBIT_ETH) && + (card->info.link_type != QETH_LINK_TYPE_OSN) && + (card->info.link_type != QETH_LINK_TYPE_10GBIT_ETH)) + rc |= BMCR_SPEED100; + break; + case MII_BMSR: /* Basic mode status register */ + rc = BMSR_ERCAP | BMSR_ANEGCOMPLETE | BMSR_LSTATUS | + BMSR_10HALF | BMSR_10FULL | BMSR_100HALF | BMSR_100FULL | + BMSR_100BASE4; + break; + case MII_PHYSID1: /* PHYS ID 1 */ + rc = (dev->dev_addr[0] << 16) | (dev->dev_addr[1] << 8) | + dev->dev_addr[2]; + rc = (rc >> 5) & 0xFFFF; + break; + case MII_PHYSID2: /* PHYS ID 2 */ + rc = (dev->dev_addr[2] << 10) & 0xFFFF; + break; + case MII_ADVERTISE: /* Advertisement control reg */ + rc = ADVERTISE_ALL; + break; + case MII_LPA: /* Link partner ability reg */ + rc = LPA_10HALF | LPA_10FULL | LPA_100HALF | LPA_100FULL | + LPA_100BASE4 | LPA_LPACK; + break; 
+ case MII_EXPANSION: /* Expansion register */ + break; + case MII_DCOUNTER: /* disconnect counter */ + break; + case MII_FCSCOUNTER: /* false carrier counter */ + break; + case MII_NWAYTEST: /* N-way auto-neg test register */ + break; + case MII_RERRCOUNTER: /* rx error counter */ + rc = card->stats.rx_errors; + break; + case MII_SREVISION: /* silicon revision */ + break; + case MII_RESV1: /* reserved 1 */ + break; + case MII_LBRERROR: /* loopback, rx, bypass error */ + break; + case MII_PHYADDR: /* physical address */ + break; + case MII_RESV2: /* reserved 2 */ + break; + case MII_TPISTATUS: /* TPI status for 10mbps */ + break; + case MII_NCONFIG: /* network interface config */ + break; + default: + break; + } + return rc; +} +EXPORT_SYMBOL_GPL(qeth_mdio_read); + +static int qeth_send_ipa_snmp_cmd(struct qeth_card *card, + struct qeth_cmd_buffer *iob, int len, + int (*reply_cb)(struct qeth_card *, struct qeth_reply *, + unsigned long), + void *reply_param) +{ + u16 s1, s2; + + QETH_DBF_TEXT(trace, 4, "sendsnmp"); + + memcpy(iob->data, IPA_PDU_HEADER, IPA_PDU_HEADER_SIZE); + memcpy(QETH_IPA_CMD_DEST_ADDR(iob->data), + &card->token.ulp_connection_r, QETH_MPC_TOKEN_LENGTH); + /* adjust PDU length fields in IPA_PDU_HEADER */ + s1 = (u32) IPA_PDU_HEADER_SIZE + len; + s2 = (u32) len; + memcpy(QETH_IPA_PDU_LEN_TOTAL(iob->data), &s1, 2); + memcpy(QETH_IPA_PDU_LEN_PDU1(iob->data), &s2, 2); + memcpy(QETH_IPA_PDU_LEN_PDU2(iob->data), &s2, 2); + memcpy(QETH_IPA_PDU_LEN_PDU3(iob->data), &s2, 2); + return qeth_send_control_data(card, IPA_PDU_HEADER_SIZE + len, iob, + reply_cb, reply_param); +} + +static int qeth_snmp_command_cb(struct qeth_card *card, + struct qeth_reply *reply, unsigned long sdata) +{ + struct qeth_ipa_cmd *cmd; + struct qeth_arp_query_info *qinfo; + struct qeth_snmp_cmd *snmp; + unsigned char *data; + __u16 data_len; + + QETH_DBF_TEXT(trace, 3, "snpcmdcb"); + + cmd = (struct qeth_ipa_cmd *) sdata; + data = (unsigned char *)((char *)cmd - reply->offset); + qinfo = (struct qeth_arp_query_info *) reply->param; + snmp = &cmd->data.setadapterparms.data.snmp; + + if (cmd->hdr.return_code) { + QETH_DBF_TEXT_(trace, 4, "scer1%i", cmd->hdr.return_code); + return 0; + } + if (cmd->data.setadapterparms.hdr.return_code) { + cmd->hdr.return_code = + cmd->data.setadapterparms.hdr.return_code; + QETH_DBF_TEXT_(trace, 4, "scer2%i", cmd->hdr.return_code); + return 0; + } + data_len = *((__u16 *)QETH_IPA_PDU_LEN_PDU1(data)); + if (cmd->data.setadapterparms.hdr.seq_no == 1) + data_len -= (__u16)((char *)&snmp->data - (char *)cmd); + else + data_len -= (__u16)((char *)&snmp->request - (char *)cmd); + + /* check if there is enough room in userspace */ + if ((qinfo->udata_len - qinfo->udata_offset) < data_len) { + QETH_DBF_TEXT_(trace, 4, "scer3%i", -ENOMEM); + cmd->hdr.return_code = -ENOMEM; + return 0; + } + QETH_DBF_TEXT_(trace, 4, "snore%i", + cmd->data.setadapterparms.hdr.used_total); + QETH_DBF_TEXT_(trace, 4, "sseqn%i", + cmd->data.setadapterparms.hdr.seq_no); + /*copy entries to user buffer*/ + if (cmd->data.setadapterparms.hdr.seq_no == 1) { + memcpy(qinfo->udata + qinfo->udata_offset, + (char *)snmp, + data_len + offsetof(struct qeth_snmp_cmd, data)); + qinfo->udata_offset += offsetof(struct qeth_snmp_cmd, data); + } else { + memcpy(qinfo->udata + qinfo->udata_offset, + (char *)&snmp->request, data_len); + } + qinfo->udata_offset += data_len; + /* check if all replies received ... 
*/ + QETH_DBF_TEXT_(trace, 4, "srtot%i", + cmd->data.setadapterparms.hdr.used_total); + QETH_DBF_TEXT_(trace, 4, "srseq%i", + cmd->data.setadapterparms.hdr.seq_no); + if (cmd->data.setadapterparms.hdr.seq_no < + cmd->data.setadapterparms.hdr.used_total) + return 1; + return 0; +} + +int qeth_snmp_command(struct qeth_card *card, char __user *udata) +{ + struct qeth_cmd_buffer *iob; + struct qeth_ipa_cmd *cmd; + struct qeth_snmp_ureq *ureq; + int req_len; + struct qeth_arp_query_info qinfo = {0, }; + int rc = 0; + + QETH_DBF_TEXT(trace, 3, "snmpcmd"); + + if (card->info.guestlan) + return -EOPNOTSUPP; + + if ((!qeth_adp_supported(card, IPA_SETADP_SET_SNMP_CONTROL)) && + (!card->options.layer2)) { + PRINT_WARN("SNMP Query MIBS not supported " + "on %s!\n", QETH_CARD_IFNAME(card)); + return -EOPNOTSUPP; + } + /* skip 4 bytes (data_len struct member) to get req_len */ + if (copy_from_user(&req_len, udata + sizeof(int), sizeof(int))) + return -EFAULT; + ureq = kmalloc(req_len+sizeof(struct qeth_snmp_ureq_hdr), GFP_KERNEL); + if (!ureq) { + QETH_DBF_TEXT(trace, 2, "snmpnome"); + return -ENOMEM; + } + if (copy_from_user(ureq, udata, + req_len + sizeof(struct qeth_snmp_ureq_hdr))) { + kfree(ureq); + return -EFAULT; + } + qinfo.udata_len = ureq->hdr.data_len; + qinfo.udata = kzalloc(qinfo.udata_len, GFP_KERNEL); + if (!qinfo.udata) { + kfree(ureq); + return -ENOMEM; + } + qinfo.udata_offset = sizeof(struct qeth_snmp_ureq_hdr); + + iob = qeth_get_adapter_cmd(card, IPA_SETADP_SET_SNMP_CONTROL, + QETH_SNMP_SETADP_CMDLENGTH + req_len); + cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); + memcpy(&cmd->data.setadapterparms.data.snmp, &ureq->cmd, req_len); + rc = qeth_send_ipa_snmp_cmd(card, iob, QETH_SETADP_BASE_LEN + req_len, + qeth_snmp_command_cb, (void *)&qinfo); + if (rc) + PRINT_WARN("SNMP command failed on %s: (0x%x)\n", + QETH_CARD_IFNAME(card), rc); + else { + if (copy_to_user(udata, qinfo.udata, qinfo.udata_len)) + rc = -EFAULT; + } + + kfree(ureq); + kfree(qinfo.udata); + return rc; +} +EXPORT_SYMBOL_GPL(qeth_snmp_command); + +static inline int qeth_get_qdio_q_format(struct qeth_card *card) +{ + switch (card->info.type) { + case QETH_CARD_TYPE_IQD: + return 2; + default: + return 0; + } +} + +static int qeth_qdio_establish(struct qeth_card *card) +{ + struct qdio_initialize init_data; + char *qib_param_field; + struct qdio_buffer **in_sbal_ptrs; + struct qdio_buffer **out_sbal_ptrs; + int i, j, k; + int rc = 0; + + QETH_DBF_TEXT(setup, 2, "qdioest"); + + qib_param_field = kzalloc(QDIO_MAX_BUFFERS_PER_Q * sizeof(char), + GFP_KERNEL); + if (!qib_param_field) + return -ENOMEM; + + qeth_create_qib_param_field(card, qib_param_field); + qeth_create_qib_param_field_blkt(card, qib_param_field); + + in_sbal_ptrs = kmalloc(QDIO_MAX_BUFFERS_PER_Q * sizeof(void *), + GFP_KERNEL); + if (!in_sbal_ptrs) { + kfree(qib_param_field); + return -ENOMEM; + } + for (i = 0; i < QDIO_MAX_BUFFERS_PER_Q; ++i) + in_sbal_ptrs[i] = (struct qdio_buffer *) + virt_to_phys(card->qdio.in_q->bufs[i].buffer); + + out_sbal_ptrs = + kmalloc(card->qdio.no_out_queues * QDIO_MAX_BUFFERS_PER_Q * + sizeof(void *), GFP_KERNEL); + if (!out_sbal_ptrs) { + kfree(in_sbal_ptrs); + kfree(qib_param_field); + return -ENOMEM; + } + for (i = 0, k = 0; i < card->qdio.no_out_queues; ++i) + for (j = 0; j < QDIO_MAX_BUFFERS_PER_Q; ++j, ++k) { + out_sbal_ptrs[k] = (struct qdio_buffer *)virt_to_phys( + card->qdio.out_qs[i]->bufs[j].buffer); + } + + memset(&init_data, 0, sizeof(struct qdio_initialize)); + init_data.cdev = CARD_DDEV(card); + 
init_data.q_format = qeth_get_qdio_q_format(card); + init_data.qib_param_field_format = 0; + init_data.qib_param_field = qib_param_field; + init_data.min_input_threshold = QETH_MIN_INPUT_THRESHOLD; + init_data.max_input_threshold = QETH_MAX_INPUT_THRESHOLD; + init_data.min_output_threshold = QETH_MIN_OUTPUT_THRESHOLD; + init_data.max_output_threshold = QETH_MAX_OUTPUT_THRESHOLD; + init_data.no_input_qs = 1; + init_data.no_output_qs = card->qdio.no_out_queues; + init_data.input_handler = card->discipline.input_handler; + init_data.output_handler = card->discipline.output_handler; + init_data.int_parm = (unsigned long) card; + init_data.flags = QDIO_INBOUND_0COPY_SBALS | + QDIO_OUTBOUND_0COPY_SBALS | + QDIO_USE_OUTBOUND_PCIS; + init_data.input_sbal_addr_array = (void **) in_sbal_ptrs; + init_data.output_sbal_addr_array = (void **) out_sbal_ptrs; + + if (atomic_cmpxchg(&card->qdio.state, QETH_QDIO_ALLOCATED, + QETH_QDIO_ESTABLISHED) == QETH_QDIO_ALLOCATED) { + rc = qdio_initialize(&init_data); + if (rc) + atomic_set(&card->qdio.state, QETH_QDIO_ALLOCATED); + } + kfree(out_sbal_ptrs); + kfree(in_sbal_ptrs); + kfree(qib_param_field); + return rc; +} + +static void qeth_core_free_card(struct qeth_card *card) +{ + + QETH_DBF_TEXT(setup, 2, "freecrd"); + QETH_DBF_HEX(setup, 2, &card, sizeof(void *)); + qeth_clean_channel(&card->read); + qeth_clean_channel(&card->write); + if (card->dev) + free_netdev(card->dev); + kfree(card->ip_tbd_list); + qeth_free_qdio_buffers(card); + kfree(card); +} + +static struct ccw_device_id qeth_ids[] = { + {CCW_DEVICE(0x1731, 0x01), .driver_info = QETH_CARD_TYPE_OSAE}, + {CCW_DEVICE(0x1731, 0x05), .driver_info = QETH_CARD_TYPE_IQD}, + {CCW_DEVICE(0x1731, 0x06), .driver_info = QETH_CARD_TYPE_OSN}, + {}, +}; +MODULE_DEVICE_TABLE(ccw, qeth_ids); + +static struct ccw_driver qeth_ccw_driver = { + .name = "qeth", + .ids = qeth_ids, + .probe = ccwgroup_probe_ccwdev, + .remove = ccwgroup_remove_ccwdev, +}; + +static int qeth_core_driver_group(const char *buf, struct device *root_dev, + unsigned long driver_id) +{ + const char *start, *end; + char bus_ids[3][BUS_ID_SIZE], *argv[3]; + int i; + + start = buf; + for (i = 0; i < 3; i++) { + static const char delim[] = { ',', ',', '\n' }; + int len; + + end = strchr(start, delim[i]); + if (!end) + return -EINVAL; + len = min_t(ptrdiff_t, BUS_ID_SIZE, end - start); + strncpy(bus_ids[i], start, len); + bus_ids[i][len] = '\0'; + start = end + 1; + argv[i] = bus_ids[i]; + } + + return (ccwgroup_create(root_dev, driver_id, + &qeth_ccw_driver, 3, argv)); +} + +int qeth_core_hardsetup_card(struct qeth_card *card) +{ + int retries = 3; + int mpno; + int rc; + + QETH_DBF_TEXT(setup, 2, "hrdsetup"); + atomic_set(&card->force_alloc_skb, 0); +retry: + if (retries < 3) { + PRINT_WARN("Retrying to do IDX activates.\n"); + ccw_device_set_offline(CARD_DDEV(card)); + ccw_device_set_offline(CARD_WDEV(card)); + ccw_device_set_offline(CARD_RDEV(card)); + ccw_device_set_online(CARD_RDEV(card)); + ccw_device_set_online(CARD_WDEV(card)); + ccw_device_set_online(CARD_DDEV(card)); + } + rc = qeth_qdio_clear_card(card, card->info.type != QETH_CARD_TYPE_IQD); + if (rc == -ERESTARTSYS) { + QETH_DBF_TEXT(setup, 2, "break1"); + return rc; + } else if (rc) { + QETH_DBF_TEXT_(setup, 2, "1err%d", rc); + if (--retries < 0) + goto out; + else + goto retry; + } + + rc = qeth_get_unitaddr(card); + if (rc) { + QETH_DBF_TEXT_(setup, 2, "2err%d", rc); + return rc; + } + + mpno = QETH_MAX_PORTNO; + if (card->info.portno > mpno) { + PRINT_ERR("Device %s does not offer port 
number %d \n.", + CARD_BUS_ID(card), card->info.portno); + rc = -ENODEV; + goto out; + } + qeth_init_tokens(card); + qeth_init_func_level(card); + rc = qeth_idx_activate_channel(&card->read, qeth_idx_read_cb); + if (rc == -ERESTARTSYS) { + QETH_DBF_TEXT(setup, 2, "break2"); + return rc; + } else if (rc) { + QETH_DBF_TEXT_(setup, 2, "3err%d", rc); + if (--retries < 0) + goto out; + else + goto retry; + } + rc = qeth_idx_activate_channel(&card->write, qeth_idx_write_cb); + if (rc == -ERESTARTSYS) { + QETH_DBF_TEXT(setup, 2, "break3"); + return rc; + } else if (rc) { + QETH_DBF_TEXT_(setup, 2, "4err%d", rc); + if (--retries < 0) + goto out; + else + goto retry; + } + rc = qeth_mpc_initialize(card); + if (rc) { + QETH_DBF_TEXT_(setup, 2, "5err%d", rc); + goto out; + } + return 0; +out: + PRINT_ERR("Initialization in hardsetup failed! rc=%d\n", rc); + return rc; +} +EXPORT_SYMBOL_GPL(qeth_core_hardsetup_card); + +static inline int qeth_create_skb_frag(struct qdio_buffer_element *element, + struct sk_buff **pskb, int offset, int *pfrag, int data_len) +{ + struct page *page = virt_to_page(element->addr); + if (*pskb == NULL) { + /* the upper protocol layers assume that there is data in the + * skb itself. Copy a small amount (64 bytes) to make them + * happy. */ + *pskb = dev_alloc_skb(64 + ETH_HLEN); + if (!(*pskb)) + return -ENOMEM; + skb_reserve(*pskb, ETH_HLEN); + if (data_len <= 64) { + memcpy(skb_put(*pskb, data_len), element->addr + offset, + data_len); + } else { + get_page(page); + memcpy(skb_put(*pskb, 64), element->addr + offset, 64); + skb_fill_page_desc(*pskb, *pfrag, page, offset + 64, + data_len - 64); + (*pskb)->data_len += data_len - 64; + (*pskb)->len += data_len - 64; + (*pskb)->truesize += data_len - 64; + (*pfrag)++; + } + } else { + get_page(page); + skb_fill_page_desc(*pskb, *pfrag, page, offset, data_len); + (*pskb)->data_len += data_len; + (*pskb)->len += data_len; + (*pskb)->truesize += data_len; + (*pfrag)++; + } + return 0; +} + +struct sk_buff *qeth_core_get_next_skb(struct qeth_card *card, + struct qdio_buffer *buffer, + struct qdio_buffer_element **__element, int *__offset, + struct qeth_hdr **hdr) +{ + struct qdio_buffer_element *element = *__element; + int offset = *__offset; + struct sk_buff *skb = NULL; + int skb_len; + void *data_ptr; + int data_len; + int headroom = 0; + int use_rx_sg = 0; + int frag = 0; + + QETH_DBF_TEXT(trace, 6, "nextskb"); + /* qeth_hdr must not cross element boundaries */ + if (element->length < offset + sizeof(struct qeth_hdr)) { + if (qeth_is_last_sbale(element)) + return NULL; + element++; + offset = 0; + if (element->length < sizeof(struct qeth_hdr)) + return NULL; + } + *hdr = element->addr + offset; + + offset += sizeof(struct qeth_hdr); + if (card->options.layer2) { + if (card->info.type == QETH_CARD_TYPE_OSN) { + skb_len = (*hdr)->hdr.osn.pdu_length; + headroom = sizeof(struct qeth_hdr); + } else { + skb_len = (*hdr)->hdr.l2.pkt_length; + } + } else { + skb_len = (*hdr)->hdr.l3.length; + headroom = max((int)ETH_HLEN, (int)TR_HLEN); + } + + if (!skb_len) + return NULL; + + if ((skb_len >= card->options.rx_sg_cb) && + (!(card->info.type == QETH_CARD_TYPE_OSN)) && + (!atomic_read(&card->force_alloc_skb))) { + use_rx_sg = 1; + } else { + skb = dev_alloc_skb(skb_len + headroom); + if (!skb) + goto no_mem; + if (headroom) + skb_reserve(skb, headroom); + } + + data_ptr = element->addr + offset; + while (skb_len) { + data_len = min(skb_len, (int)(element->length - offset)); + if (data_len) { + if (use_rx_sg) { + if 
(qeth_create_skb_frag(element, &skb, offset, + &frag, data_len)) + goto no_mem; + } else { + memcpy(skb_put(skb, data_len), data_ptr, + data_len); + } + } + skb_len -= data_len; + if (skb_len) { + if (qeth_is_last_sbale(element)) { + QETH_DBF_TEXT(trace, 4, "unexeob"); + QETH_DBF_TEXT_(trace, 4, "%s", + CARD_BUS_ID(card)); + QETH_DBF_TEXT(qerr, 2, "unexeob"); + QETH_DBF_TEXT_(qerr, 2, "%s", + CARD_BUS_ID(card)); + QETH_DBF_HEX(misc, 4, buffer, sizeof(*buffer)); + dev_kfree_skb_any(skb); + card->stats.rx_errors++; + return NULL; + } + element++; + offset = 0; + data_ptr = element->addr; + } else { + offset += data_len; + } + } + *__element = element; + *__offset = offset; + if (use_rx_sg && card->options.performance_stats) { + card->perf_stats.sg_skbs_rx++; + card->perf_stats.sg_frags_rx += skb_shinfo(skb)->nr_frags; + } + return skb; +no_mem: + if (net_ratelimit()) { + PRINT_WARN("No memory for packet received on %s.\n", + QETH_CARD_IFNAME(card)); + QETH_DBF_TEXT(trace, 2, "noskbmem"); + QETH_DBF_TEXT_(trace, 2, "%s", CARD_BUS_ID(card)); + } + card->stats.rx_dropped++; + return NULL; +} +EXPORT_SYMBOL_GPL(qeth_core_get_next_skb); + +static void qeth_unregister_dbf_views(void) +{ + if (qeth_dbf_setup) + debug_unregister(qeth_dbf_setup); + if (qeth_dbf_qerr) + debug_unregister(qeth_dbf_qerr); + if (qeth_dbf_sense) + debug_unregister(qeth_dbf_sense); + if (qeth_dbf_misc) + debug_unregister(qeth_dbf_misc); + if (qeth_dbf_data) + debug_unregister(qeth_dbf_data); + if (qeth_dbf_control) + debug_unregister(qeth_dbf_control); + if (qeth_dbf_trace) + debug_unregister(qeth_dbf_trace); +} + +static int qeth_register_dbf_views(void) +{ + qeth_dbf_setup = debug_register(QETH_DBF_SETUP_NAME, + QETH_DBF_SETUP_PAGES, + QETH_DBF_SETUP_NR_AREAS, + QETH_DBF_SETUP_LEN); + qeth_dbf_misc = debug_register(QETH_DBF_MISC_NAME, + QETH_DBF_MISC_PAGES, + QETH_DBF_MISC_NR_AREAS, + QETH_DBF_MISC_LEN); + qeth_dbf_data = debug_register(QETH_DBF_DATA_NAME, + QETH_DBF_DATA_PAGES, + QETH_DBF_DATA_NR_AREAS, + QETH_DBF_DATA_LEN); + qeth_dbf_control = debug_register(QETH_DBF_CONTROL_NAME, + QETH_DBF_CONTROL_PAGES, + QETH_DBF_CONTROL_NR_AREAS, + QETH_DBF_CONTROL_LEN); + qeth_dbf_sense = debug_register(QETH_DBF_SENSE_NAME, + QETH_DBF_SENSE_PAGES, + QETH_DBF_SENSE_NR_AREAS, + QETH_DBF_SENSE_LEN); + qeth_dbf_qerr = debug_register(QETH_DBF_QERR_NAME, + QETH_DBF_QERR_PAGES, + QETH_DBF_QERR_NR_AREAS, + QETH_DBF_QERR_LEN); + qeth_dbf_trace = debug_register(QETH_DBF_TRACE_NAME, + QETH_DBF_TRACE_PAGES, + QETH_DBF_TRACE_NR_AREAS, + QETH_DBF_TRACE_LEN); + + if ((qeth_dbf_setup == NULL) || (qeth_dbf_misc == NULL) || + (qeth_dbf_data == NULL) || (qeth_dbf_control == NULL) || + (qeth_dbf_sense == NULL) || (qeth_dbf_qerr == NULL) || + (qeth_dbf_trace == NULL)) { + qeth_unregister_dbf_views(); + return -ENOMEM; + } + debug_register_view(qeth_dbf_setup, &debug_hex_ascii_view); + debug_set_level(qeth_dbf_setup, QETH_DBF_SETUP_LEVEL); + + debug_register_view(qeth_dbf_misc, &debug_hex_ascii_view); + debug_set_level(qeth_dbf_misc, QETH_DBF_MISC_LEVEL); + + debug_register_view(qeth_dbf_data, &debug_hex_ascii_view); + debug_set_level(qeth_dbf_data, QETH_DBF_DATA_LEVEL); + + debug_register_view(qeth_dbf_control, &debug_hex_ascii_view); + debug_set_level(qeth_dbf_control, QETH_DBF_CONTROL_LEVEL); + + debug_register_view(qeth_dbf_sense, &debug_hex_ascii_view); + debug_set_level(qeth_dbf_sense, QETH_DBF_SENSE_LEVEL); + + debug_register_view(qeth_dbf_qerr, &debug_hex_ascii_view); + debug_set_level(qeth_dbf_qerr, QETH_DBF_QERR_LEVEL); + + 
debug_register_view(qeth_dbf_trace, &debug_hex_ascii_view); + debug_set_level(qeth_dbf_trace, QETH_DBF_TRACE_LEVEL); + + return 0; +} + +int qeth_core_load_discipline(struct qeth_card *card, + enum qeth_discipline_id discipline) +{ + int rc = 0; + switch (discipline) { + case QETH_DISCIPLINE_LAYER3: + card->discipline.ccwgdriver = try_then_request_module( + symbol_get(qeth_l3_ccwgroup_driver), + "qeth_l3"); + break; + case QETH_DISCIPLINE_LAYER2: + card->discipline.ccwgdriver = try_then_request_module( + symbol_get(qeth_l2_ccwgroup_driver), + "qeth_l2"); + break; + } + if (!card->discipline.ccwgdriver) { + PRINT_ERR("Support for discipline %d not present\n", + discipline); + rc = -EINVAL; + } + return rc; +} + +void qeth_core_free_discipline(struct qeth_card *card) +{ + if (card->options.layer2) + symbol_put(qeth_l2_ccwgroup_driver); + else + symbol_put(qeth_l3_ccwgroup_driver); + card->discipline.ccwgdriver = NULL; +} + +static int qeth_core_probe_device(struct ccwgroup_device *gdev) +{ + struct qeth_card *card; + struct device *dev; + int rc; + unsigned long flags; + + QETH_DBF_TEXT(setup, 2, "probedev"); + + dev = &gdev->dev; + if (!get_device(dev)) + return -ENODEV; + + QETH_DBF_TEXT_(setup, 2, "%s", gdev->dev.bus_id); + + card = qeth_alloc_card(); + if (!card) { + QETH_DBF_TEXT_(setup, 2, "1err%d", -ENOMEM); + rc = -ENOMEM; + goto err_dev; + } + card->read.ccwdev = gdev->cdev[0]; + card->write.ccwdev = gdev->cdev[1]; + card->data.ccwdev = gdev->cdev[2]; + dev_set_drvdata(&gdev->dev, card); + card->gdev = gdev; + gdev->cdev[0]->handler = qeth_irq; + gdev->cdev[1]->handler = qeth_irq; + gdev->cdev[2]->handler = qeth_irq; + + rc = qeth_determine_card_type(card); + if (rc) { + PRINT_WARN("%s: not a valid card type\n", __func__); + QETH_DBF_TEXT_(setup, 2, "3err%d", rc); + goto err_card; + } + rc = qeth_setup_card(card); + if (rc) { + QETH_DBF_TEXT_(setup, 2, "2err%d", rc); + goto err_card; + } + + if (card->info.type == QETH_CARD_TYPE_OSN) { + rc = qeth_core_create_osn_attributes(dev); + if (rc) + goto err_card; + rc = qeth_core_load_discipline(card, QETH_DISCIPLINE_LAYER2); + if (rc) { + qeth_core_remove_osn_attributes(dev); + goto err_card; + } + rc = card->discipline.ccwgdriver->probe(card->gdev); + if (rc) { + qeth_core_free_discipline(card); + qeth_core_remove_osn_attributes(dev); + goto err_card; + } + } else { + rc = qeth_core_create_device_attributes(dev); + if (rc) + goto err_card; + } + + write_lock_irqsave(&qeth_core_card_list.rwlock, flags); + list_add_tail(&card->list, &qeth_core_card_list.list); + write_unlock_irqrestore(&qeth_core_card_list.rwlock, flags); + return 0; + +err_card: + qeth_core_free_card(card); +err_dev: + put_device(dev); + return rc; +} + +static void qeth_core_remove_device(struct ccwgroup_device *gdev) +{ + unsigned long flags; + struct qeth_card *card = dev_get_drvdata(&gdev->dev); + + if (card->discipline.ccwgdriver) { + card->discipline.ccwgdriver->remove(gdev); + qeth_core_free_discipline(card); + } + + if (card->info.type == QETH_CARD_TYPE_OSN) { + qeth_core_remove_osn_attributes(&gdev->dev); + } else { + qeth_core_remove_device_attributes(&gdev->dev); + } + write_lock_irqsave(&qeth_core_card_list.rwlock, flags); + list_del(&card->list); + write_unlock_irqrestore(&qeth_core_card_list.rwlock, flags); + qeth_core_free_card(card); + dev_set_drvdata(&gdev->dev, NULL); + put_device(&gdev->dev); + return; +} + +static int qeth_core_set_online(struct ccwgroup_device *gdev) +{ + struct qeth_card *card = dev_get_drvdata(&gdev->dev); + int rc = 0; + int 
def_discipline; + + if (!card->discipline.ccwgdriver) { + if (card->info.type == QETH_CARD_TYPE_IQD) + def_discipline = QETH_DISCIPLINE_LAYER3; + else + def_discipline = QETH_DISCIPLINE_LAYER2; + rc = qeth_core_load_discipline(card, def_discipline); + if (rc) + goto err; + rc = card->discipline.ccwgdriver->probe(card->gdev); + if (rc) + goto err; + } + rc = card->discipline.ccwgdriver->set_online(gdev); +err: + return rc; +} + +static int qeth_core_set_offline(struct ccwgroup_device *gdev) +{ + struct qeth_card *card = dev_get_drvdata(&gdev->dev); + return card->discipline.ccwgdriver->set_offline(gdev); +} + +static void qeth_core_shutdown(struct ccwgroup_device *gdev) +{ + struct qeth_card *card = dev_get_drvdata(&gdev->dev); + if (card->discipline.ccwgdriver && + card->discipline.ccwgdriver->shutdown) + card->discipline.ccwgdriver->shutdown(gdev); +} + +static struct ccwgroup_driver qeth_core_ccwgroup_driver = { + .owner = THIS_MODULE, + .name = "qeth", + .driver_id = 0xD8C5E3C8, + .probe = qeth_core_probe_device, + .remove = qeth_core_remove_device, + .set_online = qeth_core_set_online, + .set_offline = qeth_core_set_offline, + .shutdown = qeth_core_shutdown, +}; + +static ssize_t +qeth_core_driver_group_store(struct device_driver *ddrv, const char *buf, + size_t count) +{ + int err; + err = qeth_core_driver_group(buf, qeth_core_root_dev, + qeth_core_ccwgroup_driver.driver_id); + if (err) + return err; + else + return count; +} + +static DRIVER_ATTR(group, 0200, NULL, qeth_core_driver_group_store); + +static struct { + const char str[ETH_GSTRING_LEN]; +} qeth_ethtool_stats_keys[] = { +/* 0 */{"rx skbs"}, + {"rx buffers"}, + {"tx skbs"}, + {"tx buffers"}, + {"tx skbs no packing"}, + {"tx buffers no packing"}, + {"tx skbs packing"}, + {"tx buffers packing"}, + {"tx sg skbs"}, + {"tx sg frags"}, +/* 10 */{"rx sg skbs"}, + {"rx sg frags"}, + {"rx sg page allocs"}, + {"tx large kbytes"}, + {"tx large count"}, + {"tx pk state ch n->p"}, + {"tx pk state ch p->n"}, + {"tx pk watermark low"}, + {"tx pk watermark high"}, + {"queue 0 buffer usage"}, +/* 20 */{"queue 1 buffer usage"}, + {"queue 2 buffer usage"}, + {"queue 3 buffer usage"}, + {"rx handler time"}, + {"rx handler count"}, + {"rx do_QDIO time"}, + {"rx do_QDIO count"}, + {"tx handler time"}, + {"tx handler count"}, + {"tx time"}, +/* 30 */{"tx count"}, + {"tx do_QDIO time"}, + {"tx do_QDIO count"}, +}; + +int qeth_core_get_stats_count(struct net_device *dev) +{ + return (sizeof(qeth_ethtool_stats_keys) / ETH_GSTRING_LEN); +} +EXPORT_SYMBOL_GPL(qeth_core_get_stats_count); + +void qeth_core_get_ethtool_stats(struct net_device *dev, + struct ethtool_stats *stats, u64 *data) +{ + struct qeth_card *card = netdev_priv(dev); + data[0] = card->stats.rx_packets - + card->perf_stats.initial_rx_packets; + data[1] = card->perf_stats.bufs_rec; + data[2] = card->stats.tx_packets - + card->perf_stats.initial_tx_packets; + data[3] = card->perf_stats.bufs_sent; + data[4] = card->stats.tx_packets - card->perf_stats.initial_tx_packets + - card->perf_stats.skbs_sent_pack; + data[5] = card->perf_stats.bufs_sent - card->perf_stats.bufs_sent_pack; + data[6] = card->perf_stats.skbs_sent_pack; + data[7] = card->perf_stats.bufs_sent_pack; + data[8] = card->perf_stats.sg_skbs_sent; + data[9] = card->perf_stats.sg_frags_sent; + data[10] = card->perf_stats.sg_skbs_rx; + data[11] = card->perf_stats.sg_frags_rx; + data[12] = card->perf_stats.sg_alloc_page_rx; + data[13] = (card->perf_stats.large_send_bytes >> 10); + data[14] = card->perf_stats.large_send_cnt; + 
data[15] = card->perf_stats.sc_dp_p; + data[16] = card->perf_stats.sc_p_dp; + data[17] = QETH_LOW_WATERMARK_PACK; + data[18] = QETH_HIGH_WATERMARK_PACK; + data[19] = atomic_read(&card->qdio.out_qs[0]->used_buffers); + data[20] = (card->qdio.no_out_queues > 1) ? + atomic_read(&card->qdio.out_qs[1]->used_buffers) : 0; + data[21] = (card->qdio.no_out_queues > 2) ? + atomic_read(&card->qdio.out_qs[2]->used_buffers) : 0; + data[22] = (card->qdio.no_out_queues > 3) ? + atomic_read(&card->qdio.out_qs[3]->used_buffers) : 0; + data[23] = card->perf_stats.inbound_time; + data[24] = card->perf_stats.inbound_cnt; + data[25] = card->perf_stats.inbound_do_qdio_time; + data[26] = card->perf_stats.inbound_do_qdio_cnt; + data[27] = card->perf_stats.outbound_handler_time; + data[28] = card->perf_stats.outbound_handler_cnt; + data[29] = card->perf_stats.outbound_time; + data[30] = card->perf_stats.outbound_cnt; + data[31] = card->perf_stats.outbound_do_qdio_time; + data[32] = card->perf_stats.outbound_do_qdio_cnt; +} +EXPORT_SYMBOL_GPL(qeth_core_get_ethtool_stats); + +void qeth_core_get_strings(struct net_device *dev, u32 stringset, u8 *data) +{ + switch (stringset) { + case ETH_SS_STATS: + memcpy(data, &qeth_ethtool_stats_keys, + sizeof(qeth_ethtool_stats_keys)); + break; + default: + WARN_ON(1); + break; + } +} +EXPORT_SYMBOL_GPL(qeth_core_get_strings); + +void qeth_core_get_drvinfo(struct net_device *dev, + struct ethtool_drvinfo *info) +{ + struct qeth_card *card = netdev_priv(dev); + if (card->options.layer2) + strcpy(info->driver, "qeth_l2"); + else + strcpy(info->driver, "qeth_l3"); + + strcpy(info->version, "1.0"); + strcpy(info->fw_version, card->info.mcl_level); + sprintf(info->bus_info, "%s/%s/%s", + CARD_RDEV_ID(card), + CARD_WDEV_ID(card), + CARD_DDEV_ID(card)); +} +EXPORT_SYMBOL_GPL(qeth_core_get_drvinfo); + +static int __init qeth_core_init(void) +{ + int rc; + + PRINT_INFO("loading core functions\n"); + INIT_LIST_HEAD(&qeth_core_card_list.list); + rwlock_init(&qeth_core_card_list.rwlock); + + rc = qeth_register_dbf_views(); + if (rc) + goto out_err; + rc = ccw_driver_register(&qeth_ccw_driver); + if (rc) + goto ccw_err; + rc = ccwgroup_driver_register(&qeth_core_ccwgroup_driver); + if (rc) + goto ccwgroup_err; + rc = driver_create_file(&qeth_core_ccwgroup_driver.driver, + &driver_attr_group); + if (rc) + goto driver_err; + qeth_core_root_dev = s390_root_dev_register("qeth"); + rc = IS_ERR(qeth_core_root_dev) ? 
PTR_ERR(qeth_core_root_dev) : 0; + if (rc) + goto register_err; + return 0; + +register_err: + driver_remove_file(&qeth_core_ccwgroup_driver.driver, + &driver_attr_group); +driver_err: + ccwgroup_driver_unregister(&qeth_core_ccwgroup_driver); +ccwgroup_err: + ccw_driver_unregister(&qeth_ccw_driver); +ccw_err: + qeth_unregister_dbf_views(); +out_err: + PRINT_ERR("Initialization failed with code %d\n", rc); + return rc; +} + +static void __exit qeth_core_exit(void) +{ + s390_root_dev_unregister(qeth_core_root_dev); + driver_remove_file(&qeth_core_ccwgroup_driver.driver, + &driver_attr_group); + ccwgroup_driver_unregister(&qeth_core_ccwgroup_driver); + ccw_driver_unregister(&qeth_ccw_driver); + qeth_unregister_dbf_views(); + PRINT_INFO("core functions removed\n"); +} + +module_init(qeth_core_init); +module_exit(qeth_core_exit); +MODULE_AUTHOR("Frank Blaschka "); +MODULE_DESCRIPTION("qeth core functions"); +MODULE_LICENSE("GPL"); diff --git a/drivers/s390/net/qeth_core_mpc.c b/drivers/s390/net/qeth_core_mpc.c new file mode 100644 index 000000000000..8653b73e5dcf --- /dev/null +++ b/drivers/s390/net/qeth_core_mpc.c @@ -0,0 +1,266 @@ +/* + * drivers/s390/net/qeth_core_mpc.c + * + * Copyright IBM Corp. 2007 + * Author(s): Frank Pavlic , + * Thomas Spatzier , + * Frank Blaschka + */ + +#include +#include +#include "qeth_core_mpc.h" + +unsigned char IDX_ACTIVATE_READ[] = { + 0x00, 0x00, 0x80, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x19, 0x01, 0x01, 0x80, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xc8, 0xc1, + 0xd3, 0xd3, 0xd6, 0xd3, 0xc5, 0x40, 0x00, 0x00, + 0x00, 0x00 +}; + +unsigned char IDX_ACTIVATE_WRITE[] = { + 0x00, 0x00, 0x80, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x15, 0x01, 0x01, 0x80, 0x00, 0x00, 0x00, 0x00, + 0xff, 0xff, 0x00, 0x00, 0x00, 0x00, 0xc8, 0xc1, + 0xd3, 0xd3, 0xd6, 0xd3, 0xc5, 0x40, 0x00, 0x00, + 0x00, 0x00 +}; + +unsigned char CM_ENABLE[] = { + 0x00, 0xe0, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, + 0x00, 0x00, 0x00, 0x14, 0x00, 0x00, 0x00, 0x63, + 0x10, 0x00, 0x00, 0x01, + 0x00, 0x00, 0x00, 0x00, + 0x81, 0x7e, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x24, 0x00, 0x23, + 0x00, 0x00, 0x23, 0x05, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x01, 0x00, 0x00, 0x23, 0x00, 0x00, 0x00, 0x40, + 0x00, 0x0c, 0x41, 0x02, 0x00, 0x17, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x0b, 0x04, 0x01, + 0x7e, 0x04, 0x05, 0x00, 0x01, 0x01, 0x0f, + 0x00, + 0x0c, 0x04, 0x02, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff +}; + +unsigned char CM_SETUP[] = { + 0x00, 0xe0, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, + 0x00, 0x00, 0x00, 0x14, 0x00, 0x00, 0x00, 0x64, + 0x10, 0x00, 0x00, 0x01, + 0x00, 0x00, 0x00, 0x00, + 0x81, 0x7e, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x24, 0x00, 0x24, + 0x00, 0x00, 0x24, 0x05, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x01, 0x00, 0x00, 0x24, 0x00, 0x00, 0x00, 0x40, + 0x00, 0x0c, 0x41, 0x04, 0x00, 0x18, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x09, 0x04, 0x04, + 0x05, 0x00, 0x01, 0x01, 0x11, + 0x00, 0x09, 0x04, + 0x05, 0x05, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x06, + 0x04, 0x06, 0xc8, 0x00 +}; + +unsigned char ULP_ENABLE[] = { + 0x00, 0xe0, 0x00, 0x00, 0x00, 0x00, 0x00, 0x03, + 0x00, 0x00, 0x00, 0x14, 0x00, 0x00, 0x00, 0x6b, + 0x10, 0x00, 0x00, 0x01, + 0x00, 0x00, 0x00, 0x00, + 0x41, 0x7e, 0x00, 0x01, 0x00, 0x00, 0x00, 0x01, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x24, 0x00, 0x2b, + 0x00, 0x00, 0x2b, 0x05, 0x20, 0x01, 0x00, 0x00, + 0x00, 0x00, 0x00, 
0x00, 0x00, 0x00, 0x00, 0x00, + 0x01, 0x00, 0x00, 0x2b, 0x00, 0x00, 0x00, 0x40, + 0x00, 0x0c, 0x41, 0x02, 0x00, 0x1f, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x0b, 0x04, 0x01, + 0x03, 0x04, 0x05, 0x00, 0x01, 0x01, 0x12, + 0x00, + 0x14, 0x04, 0x0a, 0x00, 0x20, 0x00, 0x00, 0xff, + 0xff, 0x00, 0x08, 0xc8, 0xe8, 0xc4, 0xf1, 0xc7, + 0xf1, 0x00, 0x00 +}; + +unsigned char ULP_SETUP[] = { + 0x00, 0xe0, 0x00, 0x00, 0x00, 0x00, 0x00, 0x04, + 0x00, 0x00, 0x00, 0x14, 0x00, 0x00, 0x00, 0x6c, + 0x10, 0x00, 0x00, 0x01, + 0x00, 0x00, 0x00, 0x00, + 0x41, 0x7e, 0x00, 0x01, 0x00, 0x00, 0x00, 0x02, + 0x00, 0x00, 0x00, 0x01, 0x00, 0x24, 0x00, 0x2c, + 0x00, 0x00, 0x2c, 0x05, 0x20, 0x01, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x01, 0x00, 0x00, 0x2c, 0x00, 0x00, 0x00, 0x40, + 0x00, 0x0c, 0x41, 0x04, 0x00, 0x20, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x09, 0x04, 0x04, + 0x05, 0x00, 0x01, 0x01, 0x14, + 0x00, 0x09, 0x04, + 0x05, 0x05, 0x30, 0x01, 0x00, 0x00, + 0x00, 0x06, + 0x04, 0x06, 0x40, 0x00, + 0x00, 0x08, 0x04, 0x0b, + 0x00, 0x00, 0x00, 0x00 +}; + +unsigned char DM_ACT[] = { + 0x00, 0xe0, 0x00, 0x00, 0x00, 0x00, 0x00, 0x05, + 0x00, 0x00, 0x00, 0x14, 0x00, 0x00, 0x00, 0x55, + 0x10, 0x00, 0x00, 0x01, + 0x00, 0x00, 0x00, 0x00, + 0x41, 0x7e, 0x00, 0x01, 0x00, 0x00, 0x00, 0x03, + 0x00, 0x00, 0x00, 0x02, 0x00, 0x24, 0x00, 0x15, + 0x00, 0x00, 0x2c, 0x05, 0x20, 0x01, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x01, 0x00, 0x00, 0x15, 0x00, 0x00, 0x00, 0x40, + 0x00, 0x0c, 0x43, 0x60, 0x00, 0x09, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x09, 0x04, 0x04, + 0x05, 0x40, 0x01, 0x01, 0x00 +}; + +unsigned char IPA_PDU_HEADER[] = { + 0x00, 0xe0, 0x00, 0x00, 0x77, 0x77, 0x77, 0x77, + 0x00, 0x00, 0x00, 0x14, 0x00, 0x00, + (IPA_PDU_HEADER_SIZE+sizeof(struct qeth_ipa_cmd)) / 256, + (IPA_PDU_HEADER_SIZE+sizeof(struct qeth_ipa_cmd)) % 256, + 0x10, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, + 0xc1, 0x03, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x24, + sizeof(struct qeth_ipa_cmd) / 256, + sizeof(struct qeth_ipa_cmd) % 256, + 0x00, + sizeof(struct qeth_ipa_cmd) / 256, + sizeof(struct qeth_ipa_cmd) % 256, + 0x05, + 0x77, 0x77, 0x77, 0x77, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x01, 0x00, + sizeof(struct qeth_ipa_cmd) / 256, + sizeof(struct qeth_ipa_cmd) % 256, + 0x00, 0x00, 0x00, 0x40, +}; +EXPORT_SYMBOL_GPL(IPA_PDU_HEADER); + +unsigned char WRITE_CCW[] = { + 0x01, CCW_FLAG_SLI, 0, 0, + 0, 0, 0, 0 +}; + +unsigned char READ_CCW[] = { + 0x02, CCW_FLAG_SLI, 0, 0, + 0, 0, 0, 0 +}; + + +struct ipa_rc_msg { + enum qeth_ipa_return_codes rc; + char *msg; +}; + +static struct ipa_rc_msg qeth_ipa_rc_msg[] = { + {IPA_RC_SUCCESS, "success"}, + {IPA_RC_NOTSUPP, "Command not supported"}, + {IPA_RC_IP_TABLE_FULL, "Add Addr IP Table Full - ipv6"}, + {IPA_RC_UNKNOWN_ERROR, "IPA command failed - reason unknown"}, + {IPA_RC_UNSUPPORTED_COMMAND, "Command not supported"}, + {IPA_RC_DUP_IPV6_REMOTE, "ipv6 address already registered remote"}, + {IPA_RC_DUP_IPV6_HOME, "ipv6 address already registered"}, + {IPA_RC_UNREGISTERED_ADDR, "Address not registered"}, + {IPA_RC_NO_ID_AVAILABLE, "No identifiers available"}, + {IPA_RC_ID_NOT_FOUND, "Identifier not found"}, + {IPA_RC_INVALID_IP_VERSION, "IP version incorrect"}, + {IPA_RC_LAN_FRAME_MISMATCH, "LAN and frame mismatch"}, + {IPA_RC_L2_UNSUPPORTED_CMD, "Unsupported layer 2 command"}, + {IPA_RC_L2_DUP_MAC, "Duplicate MAC address"}, + {IPA_RC_L2_ADDR_TABLE_FULL, "Layer2 address table full"}, + 
{IPA_RC_L2_DUP_LAYER3_MAC, "Duplicate with layer 3 MAC"}, + {IPA_RC_L2_GMAC_NOT_FOUND, "GMAC not found"}, + {IPA_RC_L2_MAC_NOT_FOUND, "L2 mac address not found"}, + {IPA_RC_L2_INVALID_VLAN_ID, "L2 invalid vlan id"}, + {IPA_RC_L2_DUP_VLAN_ID, "L2 duplicate vlan id"}, + {IPA_RC_L2_VLAN_ID_NOT_FOUND, "L2 vlan id not found"}, + {IPA_RC_DATA_MISMATCH, "Data field mismatch (v4/v6 mixed)"}, + {IPA_RC_INVALID_MTU_SIZE, "Invalid MTU size"}, + {IPA_RC_INVALID_LANTYPE, "Invalid LAN type"}, + {IPA_RC_INVALID_LANNUM, "Invalid LAN num"}, + {IPA_RC_DUPLICATE_IP_ADDRESS, "Address already registered"}, + {IPA_RC_IP_ADDR_TABLE_FULL, "IP address table full"}, + {IPA_RC_LAN_PORT_STATE_ERROR, "LAN port state error"}, + {IPA_RC_SETIP_NO_STARTLAN, "Setip no startlan received"}, + {IPA_RC_SETIP_ALREADY_RECEIVED, "Setip already received"}, + {IPA_RC_IP_ADDR_ALREADY_USED, "IP address already in use on LAN"}, + {IPA_RC_MULTICAST_FULL, "No task available, multicast full"}, + {IPA_RC_SETIP_INVALID_VERSION, "SETIP invalid IP version"}, + {IPA_RC_UNSUPPORTED_SUBCMD, "Unsupported assist subcommand"}, + {IPA_RC_ARP_ASSIST_NO_ENABLE, "Only partial success, no enable"}, + {IPA_RC_PRIMARY_ALREADY_DEFINED, "Primary already defined"}, + {IPA_RC_SECOND_ALREADY_DEFINED, "Secondary already defined"}, + {IPA_RC_INVALID_SETRTG_INDICATOR, "Invalid SETRTG indicator"}, + {IPA_RC_MC_ADDR_ALREADY_DEFINED, "Multicast address already defined"}, + {IPA_RC_LAN_OFFLINE, "STRTLAN_LAN_DISABLED - LAN offline"}, + {IPA_RC_INVALID_IP_VERSION2, "Invalid IP version"}, + {IPA_RC_FFFF, "Unknown Error"} +}; + + + +char *qeth_get_ipa_msg(enum qeth_ipa_return_codes rc) +{ + int x = 0; + qeth_ipa_rc_msg[sizeof(qeth_ipa_rc_msg) / + sizeof(struct ipa_rc_msg) - 1].rc = rc; + while (qeth_ipa_rc_msg[x].rc != rc) + x++; + return qeth_ipa_rc_msg[x].msg; +} + + +struct ipa_cmd_names { + enum qeth_ipa_cmds cmd; + char *name; +}; + +static struct ipa_cmd_names qeth_ipa_cmd_names[] = { + {IPA_CMD_STARTLAN, "startlan"}, + {IPA_CMD_STOPLAN, "stoplan"}, + {IPA_CMD_SETVMAC, "setvmac"}, + {IPA_CMD_DELVMAC, "delvmca"}, + {IPA_CMD_SETGMAC, "setgmac"}, + {IPA_CMD_DELGMAC, "delgmac"}, + {IPA_CMD_SETVLAN, "setvlan"}, + {IPA_CMD_DELVLAN, "delvlan"}, + {IPA_CMD_SETCCID, "setccid"}, + {IPA_CMD_DELCCID, "delccid"}, + {IPA_CMD_MODCCID, "modccid"}, + {IPA_CMD_SETIP, "setip"}, + {IPA_CMD_QIPASSIST, "qipassist"}, + {IPA_CMD_SETASSPARMS, "setassparms"}, + {IPA_CMD_SETIPM, "setipm"}, + {IPA_CMD_DELIPM, "delipm"}, + {IPA_CMD_SETRTG, "setrtg"}, + {IPA_CMD_DELIP, "delip"}, + {IPA_CMD_SETADAPTERPARMS, "setadapterparms"}, + {IPA_CMD_SET_DIAG_ASS, "set_diag_ass"}, + {IPA_CMD_CREATE_ADDR, "create_addr"}, + {IPA_CMD_DESTROY_ADDR, "destroy_addr"}, + {IPA_CMD_REGISTER_LOCAL_ADDR, "register_local_addr"}, + {IPA_CMD_UNREGISTER_LOCAL_ADDR, "unregister_local_addr"}, + {IPA_CMD_UNKNOWN, "unknown"}, +}; + +char *qeth_get_ipa_cmd_name(enum qeth_ipa_cmds cmd) +{ + int x = 0; + qeth_ipa_cmd_names[ + sizeof(qeth_ipa_cmd_names) / + sizeof(struct ipa_cmd_names)-1].cmd = cmd; + while (qeth_ipa_cmd_names[x].cmd != cmd) + x++; + return qeth_ipa_cmd_names[x].name; +} diff --git a/drivers/s390/net/qeth_core_mpc.h b/drivers/s390/net/qeth_core_mpc.h new file mode 100644 index 000000000000..de221932f30f --- /dev/null +++ b/drivers/s390/net/qeth_core_mpc.h @@ -0,0 +1,566 @@ +/* + * drivers/s390/net/qeth_core_mpc.h + * + * Copyright IBM Corp. 
2007 + * Author(s): Frank Pavlic , + * Thomas Spatzier , + * Frank Blaschka + */ + +#ifndef __QETH_CORE_MPC_H__ +#define __QETH_CORE_MPC_H__ + +#include + +#define IPA_PDU_HEADER_SIZE 0x40 +#define QETH_IPA_PDU_LEN_TOTAL(buffer) (buffer + 0x0e) +#define QETH_IPA_PDU_LEN_PDU1(buffer) (buffer + 0x26) +#define QETH_IPA_PDU_LEN_PDU2(buffer) (buffer + 0x29) +#define QETH_IPA_PDU_LEN_PDU3(buffer) (buffer + 0x3a) + +extern unsigned char IPA_PDU_HEADER[]; +#define QETH_IPA_CMD_DEST_ADDR(buffer) (buffer + 0x2c) + +#define IPA_CMD_LENGTH (IPA_PDU_HEADER_SIZE + sizeof(struct qeth_ipa_cmd)) + +#define QETH_SEQ_NO_LENGTH 4 +#define QETH_MPC_TOKEN_LENGTH 4 +#define QETH_MCL_LENGTH 4 +#define OSA_ADDR_LEN 6 + +#define QETH_TIMEOUT (10 * HZ) +#define QETH_IPA_TIMEOUT (45 * HZ) +#define QETH_IDX_COMMAND_SEQNO 0xffff0000 +#define SR_INFO_LEN 16 + +#define QETH_CLEAR_CHANNEL_PARM -10 +#define QETH_HALT_CHANNEL_PARM -11 +#define QETH_RCD_PARM -12 + +/*****************************************************************************/ +/* IP Assist related definitions */ +/*****************************************************************************/ +#define IPA_CMD_INITIATOR_HOST 0x00 +#define IPA_CMD_INITIATOR_OSA 0x01 +#define IPA_CMD_INITIATOR_HOST_REPLY 0x80 +#define IPA_CMD_INITIATOR_OSA_REPLY 0x81 +#define IPA_CMD_PRIM_VERSION_NO 0x01 + +enum qeth_card_types { + QETH_CARD_TYPE_UNKNOWN = 0, + QETH_CARD_TYPE_OSAE = 10, + QETH_CARD_TYPE_IQD = 1234, + QETH_CARD_TYPE_OSN = 11, +}; + +#define QETH_MPC_DIFINFO_LEN_INDICATES_LINK_TYPE 0x18 +/* only the first two bytes are looked at in qeth_get_cardname_short */ +enum qeth_link_types { + QETH_LINK_TYPE_FAST_ETH = 0x01, + QETH_LINK_TYPE_HSTR = 0x02, + QETH_LINK_TYPE_GBIT_ETH = 0x03, + QETH_LINK_TYPE_OSN = 0x04, + QETH_LINK_TYPE_10GBIT_ETH = 0x10, + QETH_LINK_TYPE_LANE_ETH100 = 0x81, + QETH_LINK_TYPE_LANE_TR = 0x82, + QETH_LINK_TYPE_LANE_ETH1000 = 0x83, + QETH_LINK_TYPE_LANE = 0x88, + QETH_LINK_TYPE_ATM_NATIVE = 0x90, +}; + +enum qeth_tr_macaddr_modes { + QETH_TR_MACADDR_NONCANONICAL = 0, + QETH_TR_MACADDR_CANONICAL = 1, +}; + +enum qeth_tr_broadcast_modes { + QETH_TR_BROADCAST_ALLRINGS = 0, + QETH_TR_BROADCAST_LOCAL = 1, +}; + +/* these values match CHECKSUM_* in include/linux/skbuff.h */ +enum qeth_checksum_types { + SW_CHECKSUMMING = 0, /* TODO: set to bit flag used in IPA Command */ + HW_CHECKSUMMING = 1, + NO_CHECKSUMMING = 2, +}; +#define QETH_CHECKSUM_DEFAULT SW_CHECKSUMMING + +/* + * Routing stuff + */ +#define RESET_ROUTING_FLAG 0x10 /* indicate that routing type shall be set */ +enum qeth_routing_types { + /* TODO: set to bit flag used in IPA Command */ + NO_ROUTER = 0, + PRIMARY_ROUTER = 1, + SECONDARY_ROUTER = 2, + MULTICAST_ROUTER = 3, + PRIMARY_CONNECTOR = 4, + SECONDARY_CONNECTOR = 5, +}; + +/* IPA Commands */ +enum qeth_ipa_cmds { + IPA_CMD_STARTLAN = 0x01, + IPA_CMD_STOPLAN = 0x02, + IPA_CMD_SETVMAC = 0x21, + IPA_CMD_DELVMAC = 0x22, + IPA_CMD_SETGMAC = 0x23, + IPA_CMD_DELGMAC = 0x24, + IPA_CMD_SETVLAN = 0x25, + IPA_CMD_DELVLAN = 0x26, + IPA_CMD_SETCCID = 0x41, + IPA_CMD_DELCCID = 0x42, + IPA_CMD_MODCCID = 0x43, + IPA_CMD_SETIP = 0xb1, + IPA_CMD_QIPASSIST = 0xb2, + IPA_CMD_SETASSPARMS = 0xb3, + IPA_CMD_SETIPM = 0xb4, + IPA_CMD_DELIPM = 0xb5, + IPA_CMD_SETRTG = 0xb6, + IPA_CMD_DELIP = 0xb7, + IPA_CMD_SETADAPTERPARMS = 0xb8, + IPA_CMD_SET_DIAG_ASS = 0xb9, + IPA_CMD_CREATE_ADDR = 0xc3, + IPA_CMD_DESTROY_ADDR = 0xc4, + IPA_CMD_REGISTER_LOCAL_ADDR = 0xd1, + IPA_CMD_UNREGISTER_LOCAL_ADDR = 0xd2, + IPA_CMD_UNKNOWN = 0x00 +}; + +enum qeth_ip_ass_cmds { + 
IPA_CMD_ASS_START = 0x0001, + IPA_CMD_ASS_STOP = 0x0002, + IPA_CMD_ASS_CONFIGURE = 0x0003, + IPA_CMD_ASS_ENABLE = 0x0004, +}; + +enum qeth_arp_process_subcmds { + IPA_CMD_ASS_ARP_SET_NO_ENTRIES = 0x0003, + IPA_CMD_ASS_ARP_QUERY_CACHE = 0x0004, + IPA_CMD_ASS_ARP_ADD_ENTRY = 0x0005, + IPA_CMD_ASS_ARP_REMOVE_ENTRY = 0x0006, + IPA_CMD_ASS_ARP_FLUSH_CACHE = 0x0007, + IPA_CMD_ASS_ARP_QUERY_INFO = 0x0104, + IPA_CMD_ASS_ARP_QUERY_STATS = 0x0204, +}; + + +/* Return Codes for IPA Commands + * according to OSA card Specs */ + +enum qeth_ipa_return_codes { + IPA_RC_SUCCESS = 0x0000, + IPA_RC_NOTSUPP = 0x0001, + IPA_RC_IP_TABLE_FULL = 0x0002, + IPA_RC_UNKNOWN_ERROR = 0x0003, + IPA_RC_UNSUPPORTED_COMMAND = 0x0004, + IPA_RC_DUP_IPV6_REMOTE = 0x0008, + IPA_RC_DUP_IPV6_HOME = 0x0010, + IPA_RC_UNREGISTERED_ADDR = 0x0011, + IPA_RC_NO_ID_AVAILABLE = 0x0012, + IPA_RC_ID_NOT_FOUND = 0x0013, + IPA_RC_INVALID_IP_VERSION = 0x0020, + IPA_RC_LAN_FRAME_MISMATCH = 0x0040, + IPA_RC_L2_UNSUPPORTED_CMD = 0x2003, + IPA_RC_L2_DUP_MAC = 0x2005, + IPA_RC_L2_ADDR_TABLE_FULL = 0x2006, + IPA_RC_L2_DUP_LAYER3_MAC = 0x200a, + IPA_RC_L2_GMAC_NOT_FOUND = 0x200b, + IPA_RC_L2_MAC_NOT_FOUND = 0x2010, + IPA_RC_L2_INVALID_VLAN_ID = 0x2015, + IPA_RC_L2_DUP_VLAN_ID = 0x2016, + IPA_RC_L2_VLAN_ID_NOT_FOUND = 0x2017, + IPA_RC_DATA_MISMATCH = 0xe001, + IPA_RC_INVALID_MTU_SIZE = 0xe002, + IPA_RC_INVALID_LANTYPE = 0xe003, + IPA_RC_INVALID_LANNUM = 0xe004, + IPA_RC_DUPLICATE_IP_ADDRESS = 0xe005, + IPA_RC_IP_ADDR_TABLE_FULL = 0xe006, + IPA_RC_LAN_PORT_STATE_ERROR = 0xe007, + IPA_RC_SETIP_NO_STARTLAN = 0xe008, + IPA_RC_SETIP_ALREADY_RECEIVED = 0xe009, + IPA_RC_IP_ADDR_ALREADY_USED = 0xe00a, + IPA_RC_MULTICAST_FULL = 0xe00b, + IPA_RC_SETIP_INVALID_VERSION = 0xe00d, + IPA_RC_UNSUPPORTED_SUBCMD = 0xe00e, + IPA_RC_ARP_ASSIST_NO_ENABLE = 0xe00f, + IPA_RC_PRIMARY_ALREADY_DEFINED = 0xe010, + IPA_RC_SECOND_ALREADY_DEFINED = 0xe011, + IPA_RC_INVALID_SETRTG_INDICATOR = 0xe012, + IPA_RC_MC_ADDR_ALREADY_DEFINED = 0xe013, + IPA_RC_LAN_OFFLINE = 0xe080, + IPA_RC_INVALID_IP_VERSION2 = 0xf001, + IPA_RC_FFFF = 0xffff +}; + +/* IPA function flags; each flag marks availability of respective function */ +enum qeth_ipa_funcs { + IPA_ARP_PROCESSING = 0x00000001L, + IPA_INBOUND_CHECKSUM = 0x00000002L, + IPA_OUTBOUND_CHECKSUM = 0x00000004L, + IPA_IP_FRAGMENTATION = 0x00000008L, + IPA_FILTERING = 0x00000010L, + IPA_IPV6 = 0x00000020L, + IPA_MULTICASTING = 0x00000040L, + IPA_IP_REASSEMBLY = 0x00000080L, + IPA_QUERY_ARP_COUNTERS = 0x00000100L, + IPA_QUERY_ARP_ADDR_INFO = 0x00000200L, + IPA_SETADAPTERPARMS = 0x00000400L, + IPA_VLAN_PRIO = 0x00000800L, + IPA_PASSTHRU = 0x00001000L, + IPA_FLUSH_ARP_SUPPORT = 0x00002000L, + IPA_FULL_VLAN = 0x00004000L, + IPA_INBOUND_PASSTHRU = 0x00008000L, + IPA_SOURCE_MAC = 0x00010000L, + IPA_OSA_MC_ROUTER = 0x00020000L, + IPA_QUERY_ARP_ASSIST = 0x00040000L, + IPA_INBOUND_TSO = 0x00080000L, + IPA_OUTBOUND_TSO = 0x00100000L, +}; + +/* SETIP/DELIP IPA Command: ***************************************************/ +enum qeth_ipa_setdelip_flags { + QETH_IPA_SETDELIP_DEFAULT = 0x00L, /* default */ + QETH_IPA_SETIP_VIPA_FLAG = 0x01L, /* no grat. ARP */ + QETH_IPA_SETIP_TAKEOVER_FLAG = 0x02L, /* nofail on grat. 
ARP */ + QETH_IPA_DELIP_ADDR_2_B_TAKEN_OVER = 0x20L, + QETH_IPA_DELIP_VIPA_FLAG = 0x40L, + QETH_IPA_DELIP_ADDR_NEEDS_SETIP = 0x80L, +}; + +/* SETADAPTER IPA Command: ****************************************************/ +enum qeth_ipa_setadp_cmd { + IPA_SETADP_QUERY_COMMANDS_SUPPORTED = 0x0001, + IPA_SETADP_ALTER_MAC_ADDRESS = 0x0002, + IPA_SETADP_ADD_DELETE_GROUP_ADDRESS = 0x0004, + IPA_SETADP_ADD_DELETE_FUNCTIONAL_ADDR = 0x0008, + IPA_SETADP_SET_ADDRESSING_MODE = 0x0010, + IPA_SETADP_SET_CONFIG_PARMS = 0x0020, + IPA_SETADP_SET_CONFIG_PARMS_EXTENDED = 0x0040, + IPA_SETADP_SET_BROADCAST_MODE = 0x0080, + IPA_SETADP_SEND_OSA_MESSAGE = 0x0100, + IPA_SETADP_SET_SNMP_CONTROL = 0x0200, + IPA_SETADP_QUERY_CARD_INFO = 0x0400, + IPA_SETADP_SET_PROMISC_MODE = 0x0800, +}; +enum qeth_ipa_mac_ops { + CHANGE_ADDR_READ_MAC = 0, + CHANGE_ADDR_REPLACE_MAC = 1, + CHANGE_ADDR_ADD_MAC = 2, + CHANGE_ADDR_DEL_MAC = 4, + CHANGE_ADDR_RESET_MAC = 8, +}; +enum qeth_ipa_addr_ops { + CHANGE_ADDR_READ_ADDR = 0, + CHANGE_ADDR_ADD_ADDR = 1, + CHANGE_ADDR_DEL_ADDR = 2, + CHANGE_ADDR_FLUSH_ADDR_TABLE = 4, +}; +enum qeth_ipa_promisc_modes { + SET_PROMISC_MODE_OFF = 0, + SET_PROMISC_MODE_ON = 1, +}; + +/* (SET)DELIP(M) IPA stuff ***************************************************/ +struct qeth_ipacmd_setdelip4 { + __u8 ip_addr[4]; + __u8 mask[4]; + __u32 flags; +} __attribute__ ((packed)); + +struct qeth_ipacmd_setdelip6 { + __u8 ip_addr[16]; + __u8 mask[16]; + __u32 flags; +} __attribute__ ((packed)); + +struct qeth_ipacmd_setdelipm { + __u8 mac[6]; + __u8 padding[2]; + __u8 ip6[12]; + __u8 ip4[4]; +} __attribute__ ((packed)); + +struct qeth_ipacmd_layer2setdelmac { + __u32 mac_length; + __u8 mac[6]; +} __attribute__ ((packed)); + +struct qeth_ipacmd_layer2setdelvlan { + __u16 vlan_id; +} __attribute__ ((packed)); + + +struct qeth_ipacmd_setassparms_hdr { + __u32 assist_no; + __u16 length; + __u16 command_code; + __u16 return_code; + __u8 number_of_replies; + __u8 seq_no; +} __attribute__((packed)); + +struct qeth_arp_query_data { + __u16 request_bits; + __u16 reply_bits; + __u32 no_entries; + char data; +} __attribute__((packed)); + +/* used as parameter for arp_query reply */ +struct qeth_arp_query_info { + __u32 udata_len; + __u16 mask_bits; + __u32 udata_offset; + __u32 no_entries; + char *udata; +}; + +/* SETASSPARMS IPA Command: */ +struct qeth_ipacmd_setassparms { + struct qeth_ipacmd_setassparms_hdr hdr; + union { + __u32 flags_32bit; + struct qeth_arp_cache_entry add_arp_entry; + struct qeth_arp_query_data query_arp; + __u8 ip[16]; + } data; +} __attribute__ ((packed)); + + +/* SETRTG IPA Command: ****************************************************/ +struct qeth_set_routing { + __u8 type; +}; + +/* SETADAPTERPARMS IPA Command: *******************************************/ +struct qeth_query_cmds_supp { + __u32 no_lantypes_supp; + __u8 lan_type; + __u8 reserved1[3]; + __u32 supported_cmds; + __u8 reserved2[8]; +} __attribute__ ((packed)); + +struct qeth_change_addr { + __u32 cmd; + __u32 addr_size; + __u32 no_macs; + __u8 addr[OSA_ADDR_LEN]; +} __attribute__ ((packed)); + + +struct qeth_snmp_cmd { + __u8 token[16]; + __u32 request; + __u32 interface; + __u32 returncode; + __u32 firmwarelevel; + __u32 seqno; + __u8 data; +} __attribute__ ((packed)); + +struct qeth_snmp_ureq_hdr { + __u32 data_len; + __u32 req_len; + __u32 reserved1; + __u32 reserved2; +} __attribute__ ((packed)); + +struct qeth_snmp_ureq { + struct qeth_snmp_ureq_hdr hdr; + struct qeth_snmp_cmd cmd; +} __attribute__((packed)); + +struct 
qeth_ipacmd_setadpparms_hdr { + __u32 supp_hw_cmds; + __u32 reserved1; + __u16 cmdlength; + __u16 reserved2; + __u32 command_code; + __u16 return_code; + __u8 used_total; + __u8 seq_no; + __u32 reserved3; +} __attribute__ ((packed)); + +struct qeth_ipacmd_setadpparms { + struct qeth_ipacmd_setadpparms_hdr hdr; + union { + struct qeth_query_cmds_supp query_cmds_supp; + struct qeth_change_addr change_addr; + struct qeth_snmp_cmd snmp; + __u32 mode; + } data; +} __attribute__ ((packed)); + +/* CREATE_ADDR IPA Command: ***********************************************/ +struct qeth_create_destroy_address { + __u8 unique_id[8]; +} __attribute__ ((packed)); + +/* Header for each IPA command */ +struct qeth_ipacmd_hdr { + __u8 command; + __u8 initiator; + __u16 seqno; + __u16 return_code; + __u8 adapter_type; + __u8 rel_adapter_no; + __u8 prim_version_no; + __u8 param_count; + __u16 prot_version; + __u32 ipa_supported; + __u32 ipa_enabled; +} __attribute__ ((packed)); + +/* The IPA command itself */ +struct qeth_ipa_cmd { + struct qeth_ipacmd_hdr hdr; + union { + struct qeth_ipacmd_setdelip4 setdelip4; + struct qeth_ipacmd_setdelip6 setdelip6; + struct qeth_ipacmd_setdelipm setdelipm; + struct qeth_ipacmd_setassparms setassparms; + struct qeth_ipacmd_layer2setdelmac setdelmac; + struct qeth_ipacmd_layer2setdelvlan setdelvlan; + struct qeth_create_destroy_address create_destroy_addr; + struct qeth_ipacmd_setadpparms setadapterparms; + struct qeth_set_routing setrtg; + } data; +} __attribute__ ((packed)); + +/* + * special command for ARP processing. + * this is not included in setassparms command before, because we get + * problem with the size of struct qeth_ipacmd_setassparms otherwise + */ +enum qeth_ipa_arp_return_codes { + QETH_IPA_ARP_RC_SUCCESS = 0x0000, + QETH_IPA_ARP_RC_FAILED = 0x0001, + QETH_IPA_ARP_RC_NOTSUPP = 0x0002, + QETH_IPA_ARP_RC_OUT_OF_RANGE = 0x0003, + QETH_IPA_ARP_RC_Q_NOTSUPP = 0x0004, + QETH_IPA_ARP_RC_Q_NO_DATA = 0x0008, +}; + + +extern char *qeth_get_ipa_msg(enum qeth_ipa_return_codes rc); +extern char *qeth_get_ipa_cmd_name(enum qeth_ipa_cmds cmd); + +#define QETH_SETASS_BASE_LEN (sizeof(struct qeth_ipacmd_hdr) + \ + sizeof(struct qeth_ipacmd_setassparms_hdr)) +#define QETH_IPA_ARP_DATA_POS(buffer) (buffer + IPA_PDU_HEADER_SIZE + \ + QETH_SETASS_BASE_LEN) +#define QETH_SETADP_BASE_LEN (sizeof(struct qeth_ipacmd_hdr) + \ + sizeof(struct qeth_ipacmd_setadpparms_hdr)) +#define QETH_SNMP_SETADP_CMDLENGTH 16 + +#define QETH_ARP_DATA_SIZE 3968 +#define QETH_ARP_CMD_LEN (QETH_ARP_DATA_SIZE + 8) +/* Helper functions */ +#define IS_IPA_REPLY(cmd) ((cmd->hdr.initiator == IPA_CMD_INITIATOR_HOST) || \ + (cmd->hdr.initiator == IPA_CMD_INITIATOR_OSA_REPLY)) + +/*****************************************************************************/ +/* END OF IP Assist related definitions */ +/*****************************************************************************/ + + +extern unsigned char WRITE_CCW[]; +extern unsigned char READ_CCW[]; + +extern unsigned char CM_ENABLE[]; +#define CM_ENABLE_SIZE 0x63 +#define QETH_CM_ENABLE_ISSUER_RM_TOKEN(buffer) (buffer + 0x2c) +#define QETH_CM_ENABLE_FILTER_TOKEN(buffer) (buffer + 0x53) +#define QETH_CM_ENABLE_USER_DATA(buffer) (buffer + 0x5b) + +#define QETH_CM_ENABLE_RESP_FILTER_TOKEN(buffer) \ + (PDU_ENCAPSULATION(buffer) + 0x13) + + +extern unsigned char CM_SETUP[]; +#define CM_SETUP_SIZE 0x64 +#define QETH_CM_SETUP_DEST_ADDR(buffer) (buffer + 0x2c) +#define QETH_CM_SETUP_CONNECTION_TOKEN(buffer) (buffer + 0x51) +#define 
QETH_CM_SETUP_FILTER_TOKEN(buffer) (buffer + 0x5a) + +#define QETH_CM_SETUP_RESP_DEST_ADDR(buffer) \ + (PDU_ENCAPSULATION(buffer) + 0x1a) + +extern unsigned char ULP_ENABLE[]; +#define ULP_ENABLE_SIZE 0x6b +#define QETH_ULP_ENABLE_LINKNUM(buffer) (buffer + 0x61) +#define QETH_ULP_ENABLE_DEST_ADDR(buffer) (buffer + 0x2c) +#define QETH_ULP_ENABLE_FILTER_TOKEN(buffer) (buffer + 0x53) +#define QETH_ULP_ENABLE_PORTNAME_AND_LL(buffer) (buffer + 0x62) +#define QETH_ULP_ENABLE_RESP_FILTER_TOKEN(buffer) \ + (PDU_ENCAPSULATION(buffer) + 0x13) +#define QETH_ULP_ENABLE_RESP_MAX_MTU(buffer) \ + (PDU_ENCAPSULATION(buffer) + 0x1f) +#define QETH_ULP_ENABLE_RESP_DIFINFO_LEN(buffer) \ + (PDU_ENCAPSULATION(buffer) + 0x17) +#define QETH_ULP_ENABLE_RESP_LINK_TYPE(buffer) \ + (PDU_ENCAPSULATION(buffer) + 0x2b) +/* Layer 2 defintions */ +#define QETH_PROT_LAYER2 0x08 +#define QETH_PROT_TCPIP 0x03 +#define QETH_PROT_OSN2 0x0a +#define QETH_ULP_ENABLE_PROT_TYPE(buffer) (buffer + 0x50) +#define QETH_IPA_CMD_PROT_TYPE(buffer) (buffer + 0x19) + +extern unsigned char ULP_SETUP[]; +#define ULP_SETUP_SIZE 0x6c +#define QETH_ULP_SETUP_DEST_ADDR(buffer) (buffer + 0x2c) +#define QETH_ULP_SETUP_CONNECTION_TOKEN(buffer) (buffer + 0x51) +#define QETH_ULP_SETUP_FILTER_TOKEN(buffer) (buffer + 0x5a) +#define QETH_ULP_SETUP_CUA(buffer) (buffer + 0x68) +#define QETH_ULP_SETUP_REAL_DEVADDR(buffer) (buffer + 0x6a) + +#define QETH_ULP_SETUP_RESP_CONNECTION_TOKEN(buffer) \ + (PDU_ENCAPSULATION(buffer) + 0x1a) + + +extern unsigned char DM_ACT[]; +#define DM_ACT_SIZE 0x55 +#define QETH_DM_ACT_DEST_ADDR(buffer) (buffer + 0x2c) +#define QETH_DM_ACT_CONNECTION_TOKEN(buffer) (buffer + 0x51) + + + +#define QETH_TRANSPORT_HEADER_SEQ_NO(buffer) (buffer + 4) +#define QETH_PDU_HEADER_SEQ_NO(buffer) (buffer + 0x1c) +#define QETH_PDU_HEADER_ACK_SEQ_NO(buffer) (buffer + 0x20) + +extern unsigned char IDX_ACTIVATE_READ[]; +extern unsigned char IDX_ACTIVATE_WRITE[]; + +#define IDX_ACTIVATE_SIZE 0x22 +#define QETH_IDX_ACT_PNO(buffer) (buffer+0x0b) +#define QETH_IDX_ACT_ISSUER_RM_TOKEN(buffer) (buffer + 0x0c) +#define QETH_IDX_NO_PORTNAME_REQUIRED(buffer) ((buffer)[0x0b] & 0x80) +#define QETH_IDX_ACT_FUNC_LEVEL(buffer) (buffer + 0x10) +#define QETH_IDX_ACT_DATASET_NAME(buffer) (buffer + 0x16) +#define QETH_IDX_ACT_QDIO_DEV_CUA(buffer) (buffer + 0x1e) +#define QETH_IDX_ACT_QDIO_DEV_REALADDR(buffer) (buffer + 0x20) +#define QETH_IS_IDX_ACT_POS_REPLY(buffer) (((buffer)[0x08] & 3) == 2) +#define QETH_IDX_REPLY_LEVEL(buffer) (buffer + 0x12) +#define QETH_IDX_ACT_CAUSE_CODE(buffer) (buffer)[0x09] + +#define PDU_ENCAPSULATION(buffer) \ + (buffer + *(buffer + (*(buffer + 0x0b)) + \ + *(buffer + *(buffer + 0x0b) + 0x11) + 0x07)) + +#define IS_IPA(buffer) \ + ((buffer) && \ + (*(buffer + ((*(buffer + 0x0b)) + 4)) == 0xc1)) + +#define ADDR_FRAME_TYPE_DIX 1 +#define ADDR_FRAME_TYPE_802_3 2 +#define ADDR_FRAME_TYPE_TR_WITHOUT_SR 0x10 +#define ADDR_FRAME_TYPE_TR_WITH_SR 0x20 + +#endif diff --git a/drivers/s390/net/qeth_core_offl.c b/drivers/s390/net/qeth_core_offl.c new file mode 100644 index 000000000000..8b407d6a83cf --- /dev/null +++ b/drivers/s390/net/qeth_core_offl.c @@ -0,0 +1,701 @@ +/* + * drivers/s390/net/qeth_core_offl.c + * + * Copyright IBM Corp. 
2007 + * Author(s): Thomas Spatzier , + * Frank Blaschka + */ + +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include + +#include "qeth_core.h" +#include "qeth_core_mpc.h" +#include "qeth_core_offl.h" + +int qeth_eddp_check_buffers_for_context(struct qeth_qdio_out_q *queue, + struct qeth_eddp_context *ctx) +{ + int index = queue->next_buf_to_fill; + int elements_needed = ctx->num_elements; + int elements_in_buffer; + int skbs_in_buffer; + int buffers_needed = 0; + + QETH_DBF_TEXT(trace, 5, "eddpcbfc"); + while (elements_needed > 0) { + buffers_needed++; + if (atomic_read(&queue->bufs[index].state) != + QETH_QDIO_BUF_EMPTY) + return -EBUSY; + + elements_in_buffer = QETH_MAX_BUFFER_ELEMENTS(queue->card) - + queue->bufs[index].next_element_to_fill; + skbs_in_buffer = elements_in_buffer / ctx->elements_per_skb; + elements_needed -= skbs_in_buffer * ctx->elements_per_skb; + index = (index + 1) % QDIO_MAX_BUFFERS_PER_Q; + } + return buffers_needed; +} + +static void qeth_eddp_free_context(struct qeth_eddp_context *ctx) +{ + int i; + + QETH_DBF_TEXT(trace, 5, "eddpfctx"); + for (i = 0; i < ctx->num_pages; ++i) + free_page((unsigned long)ctx->pages[i]); + kfree(ctx->pages); + kfree(ctx->elements); + kfree(ctx); +} + + +static void qeth_eddp_get_context(struct qeth_eddp_context *ctx) +{ + atomic_inc(&ctx->refcnt); +} + +void qeth_eddp_put_context(struct qeth_eddp_context *ctx) +{ + if (atomic_dec_return(&ctx->refcnt) == 0) + qeth_eddp_free_context(ctx); +} +EXPORT_SYMBOL_GPL(qeth_eddp_put_context); + +void qeth_eddp_buf_release_contexts(struct qeth_qdio_out_buffer *buf) +{ + struct qeth_eddp_context_reference *ref; + + QETH_DBF_TEXT(trace, 6, "eddprctx"); + while (!list_empty(&buf->ctx_list)) { + ref = list_entry(buf->ctx_list.next, + struct qeth_eddp_context_reference, list); + qeth_eddp_put_context(ref->ctx); + list_del(&ref->list); + kfree(ref); + } +} + +static int qeth_eddp_buf_ref_context(struct qeth_qdio_out_buffer *buf, + struct qeth_eddp_context *ctx) +{ + struct qeth_eddp_context_reference *ref; + + QETH_DBF_TEXT(trace, 6, "eddprfcx"); + ref = kmalloc(sizeof(struct qeth_eddp_context_reference), GFP_ATOMIC); + if (ref == NULL) + return -ENOMEM; + qeth_eddp_get_context(ctx); + ref->ctx = ctx; + list_add_tail(&ref->list, &buf->ctx_list); + return 0; +} + +int qeth_eddp_fill_buffer(struct qeth_qdio_out_q *queue, + struct qeth_eddp_context *ctx, int index) +{ + struct qeth_qdio_out_buffer *buf = NULL; + struct qdio_buffer *buffer; + int elements = ctx->num_elements; + int element = 0; + int flush_cnt = 0; + int must_refcnt = 1; + int i; + + QETH_DBF_TEXT(trace, 5, "eddpfibu"); + while (elements > 0) { + buf = &queue->bufs[index]; + if (atomic_read(&buf->state) != QETH_QDIO_BUF_EMPTY) { + /* normally this should not happen since we checked for + * available elements in qeth_check_elements_for_context + */ + if (element == 0) + return -EBUSY; + else { + PRINT_WARN("could only partially fill eddp " + "buffer!\n"); + goto out; + } + } + /* check if the whole next skb fits into current buffer */ + if ((QETH_MAX_BUFFER_ELEMENTS(queue->card) - + buf->next_element_to_fill) + < ctx->elements_per_skb){ + /* no -> go to next buffer */ + atomic_set(&buf->state, QETH_QDIO_BUF_PRIMED); + index = (index + 1) % QDIO_MAX_BUFFERS_PER_Q; + flush_cnt++; + /* new buffer, so we have to add ctx to buffer'ctx_list + * and increment ctx's refcnt */ + must_refcnt = 1; + continue; + } + if (must_refcnt) { + must_refcnt = 0; + if (qeth_eddp_buf_ref_context(buf, ctx)) { + 
PRINT_WARN("no memory to create eddp context " + "reference\n"); + goto out_check; + } + } + buffer = buf->buffer; + /* fill one skb into buffer */ + for (i = 0; i < ctx->elements_per_skb; ++i) { + if (ctx->elements[element].length != 0) { + buffer->element[buf->next_element_to_fill]. + addr = ctx->elements[element].addr; + buffer->element[buf->next_element_to_fill]. + length = ctx->elements[element].length; + buffer->element[buf->next_element_to_fill]. + flags = ctx->elements[element].flags; + buf->next_element_to_fill++; + } + element++; + elements--; + } + } +out_check: + if (!queue->do_pack) { + QETH_DBF_TEXT(trace, 6, "fillbfnp"); + /* set state to PRIMED -> will be flushed */ + if (buf->next_element_to_fill > 0) { + atomic_set(&buf->state, QETH_QDIO_BUF_PRIMED); + flush_cnt++; + } + } else { + if (queue->card->options.performance_stats) + queue->card->perf_stats.skbs_sent_pack++; + QETH_DBF_TEXT(trace, 6, "fillbfpa"); + if (buf->next_element_to_fill >= + QETH_MAX_BUFFER_ELEMENTS(queue->card)) { + /* + * packed buffer if full -> set state PRIMED + * -> will be flushed + */ + atomic_set(&buf->state, QETH_QDIO_BUF_PRIMED); + flush_cnt++; + } + } +out: + return flush_cnt; +} + +static void qeth_eddp_create_segment_hdrs(struct qeth_eddp_context *ctx, + struct qeth_eddp_data *eddp, int data_len) +{ + u8 *page; + int page_remainder; + int page_offset; + int pkt_len; + struct qeth_eddp_element *element; + + QETH_DBF_TEXT(trace, 5, "eddpcrsh"); + page = ctx->pages[ctx->offset >> PAGE_SHIFT]; + page_offset = ctx->offset % PAGE_SIZE; + element = &ctx->elements[ctx->num_elements]; + pkt_len = eddp->nhl + eddp->thl + data_len; + /* FIXME: layer2 and VLAN !!! */ + if (eddp->qh.hdr.l2.id == QETH_HEADER_TYPE_LAYER2) + pkt_len += ETH_HLEN; + if (eddp->mac.h_proto == __constant_htons(ETH_P_8021Q)) + pkt_len += VLAN_HLEN; + /* does complete packet fit in current page ? */ + page_remainder = PAGE_SIZE - page_offset; + if (page_remainder < (sizeof(struct qeth_hdr) + pkt_len)) { + /* no -> go to start of next page */ + ctx->offset += page_remainder; + page = ctx->pages[ctx->offset >> PAGE_SHIFT]; + page_offset = 0; + } + memcpy(page + page_offset, &eddp->qh, sizeof(struct qeth_hdr)); + element->addr = page + page_offset; + element->length = sizeof(struct qeth_hdr); + ctx->offset += sizeof(struct qeth_hdr); + page_offset += sizeof(struct qeth_hdr); + /* add mac header (?) 
*/ + if (eddp->qh.hdr.l2.id == QETH_HEADER_TYPE_LAYER2) { + memcpy(page + page_offset, &eddp->mac, ETH_HLEN); + element->length += ETH_HLEN; + ctx->offset += ETH_HLEN; + page_offset += ETH_HLEN; + } + /* add VLAN tag */ + if (eddp->mac.h_proto == __constant_htons(ETH_P_8021Q)) { + memcpy(page + page_offset, &eddp->vlan, VLAN_HLEN); + element->length += VLAN_HLEN; + ctx->offset += VLAN_HLEN; + page_offset += VLAN_HLEN; + } + /* add network header */ + memcpy(page + page_offset, (u8 *)&eddp->nh, eddp->nhl); + element->length += eddp->nhl; + eddp->nh_in_ctx = page + page_offset; + ctx->offset += eddp->nhl; + page_offset += eddp->nhl; + /* add transport header */ + memcpy(page + page_offset, (u8 *)&eddp->th, eddp->thl); + element->length += eddp->thl; + eddp->th_in_ctx = page + page_offset; + ctx->offset += eddp->thl; +} + +static void qeth_eddp_copy_data_tcp(char *dst, struct qeth_eddp_data *eddp, + int len, __wsum *hcsum) +{ + struct skb_frag_struct *frag; + int left_in_frag; + int copy_len; + u8 *src; + + QETH_DBF_TEXT(trace, 5, "eddpcdtc"); + if (skb_shinfo(eddp->skb)->nr_frags == 0) { + skb_copy_from_linear_data_offset(eddp->skb, eddp->skb_offset, + dst, len); + *hcsum = csum_partial(eddp->skb->data + eddp->skb_offset, len, + *hcsum); + eddp->skb_offset += len; + } else { + while (len > 0) { + if (eddp->frag < 0) { + /* we're in skb->data */ + left_in_frag = (eddp->skb->len - + eddp->skb->data_len) + - eddp->skb_offset; + src = eddp->skb->data + eddp->skb_offset; + } else { + frag = &skb_shinfo(eddp->skb)->frags[ + eddp->frag]; + left_in_frag = frag->size - eddp->frag_offset; + src = (u8 *)((page_to_pfn(frag->page) << + PAGE_SHIFT) + frag->page_offset + + eddp->frag_offset); + } + if (left_in_frag <= 0) { + eddp->frag++; + eddp->frag_offset = 0; + continue; + } + copy_len = min(left_in_frag, len); + memcpy(dst, src, copy_len); + *hcsum = csum_partial(src, copy_len, *hcsum); + dst += copy_len; + eddp->frag_offset += copy_len; + eddp->skb_offset += copy_len; + len -= copy_len; + } + } +} + +static void qeth_eddp_create_segment_data_tcp(struct qeth_eddp_context *ctx, + struct qeth_eddp_data *eddp, int data_len, __wsum hcsum) +{ + u8 *page; + int page_remainder; + int page_offset; + struct qeth_eddp_element *element; + int first_lap = 1; + + QETH_DBF_TEXT(trace, 5, "eddpcsdt"); + page = ctx->pages[ctx->offset >> PAGE_SHIFT]; + page_offset = ctx->offset % PAGE_SIZE; + element = &ctx->elements[ctx->num_elements]; + while (data_len) { + page_remainder = PAGE_SIZE - page_offset; + if (page_remainder < data_len) { + qeth_eddp_copy_data_tcp(page + page_offset, eddp, + page_remainder, &hcsum); + element->length += page_remainder; + if (first_lap) + element->flags = SBAL_FLAGS_FIRST_FRAG; + else + element->flags = SBAL_FLAGS_MIDDLE_FRAG; + ctx->num_elements++; + element++; + data_len -= page_remainder; + ctx->offset += page_remainder; + page = ctx->pages[ctx->offset >> PAGE_SHIFT]; + page_offset = 0; + element->addr = page + page_offset; + } else { + qeth_eddp_copy_data_tcp(page + page_offset, eddp, + data_len, &hcsum); + element->length += data_len; + if (!first_lap) + element->flags = SBAL_FLAGS_LAST_FRAG; + ctx->num_elements++; + ctx->offset += data_len; + data_len = 0; + } + first_lap = 0; + } + ((struct tcphdr *)eddp->th_in_ctx)->check = csum_fold(hcsum); +} + +static __wsum qeth_eddp_check_tcp4_hdr(struct qeth_eddp_data *eddp, + int data_len) +{ + __wsum phcsum; /* pseudo header checksum */ + + QETH_DBF_TEXT(trace, 5, "eddpckt4"); + eddp->th.tcp.h.check = 0; + /* compute pseudo header checksum 
*/ + phcsum = csum_tcpudp_nofold(eddp->nh.ip4.h.saddr, eddp->nh.ip4.h.daddr, + eddp->thl + data_len, IPPROTO_TCP, 0); + /* compute checksum of tcp header */ + return csum_partial((u8 *)&eddp->th, eddp->thl, phcsum); +} + +static __wsum qeth_eddp_check_tcp6_hdr(struct qeth_eddp_data *eddp, + int data_len) +{ + __be32 proto; + __wsum phcsum; /* pseudo header checksum */ + + QETH_DBF_TEXT(trace, 5, "eddpckt6"); + eddp->th.tcp.h.check = 0; + /* compute pseudo header checksum */ + phcsum = csum_partial((u8 *)&eddp->nh.ip6.h.saddr, + sizeof(struct in6_addr), 0); + phcsum = csum_partial((u8 *)&eddp->nh.ip6.h.daddr, + sizeof(struct in6_addr), phcsum); + proto = htonl(IPPROTO_TCP); + phcsum = csum_partial((u8 *)&proto, sizeof(u32), phcsum); + return phcsum; +} + +static struct qeth_eddp_data *qeth_eddp_create_eddp_data(struct qeth_hdr *qh, + u8 *nh, u8 nhl, u8 *th, u8 thl) +{ + struct qeth_eddp_data *eddp; + + QETH_DBF_TEXT(trace, 5, "eddpcrda"); + eddp = kzalloc(sizeof(struct qeth_eddp_data), GFP_ATOMIC); + if (eddp) { + eddp->nhl = nhl; + eddp->thl = thl; + memcpy(&eddp->qh, qh, sizeof(struct qeth_hdr)); + memcpy(&eddp->nh, nh, nhl); + memcpy(&eddp->th, th, thl); + eddp->frag = -1; /* initially we're in skb->data */ + } + return eddp; +} + +static void __qeth_eddp_fill_context_tcp(struct qeth_eddp_context *ctx, + struct qeth_eddp_data *eddp) +{ + struct tcphdr *tcph; + int data_len; + __wsum hcsum; + + QETH_DBF_TEXT(trace, 5, "eddpftcp"); + eddp->skb_offset = sizeof(struct qeth_hdr) + eddp->nhl + eddp->thl; + if (eddp->qh.hdr.l2.id == QETH_HEADER_TYPE_LAYER2) { + eddp->skb_offset += sizeof(struct ethhdr); + if (eddp->mac.h_proto == __constant_htons(ETH_P_8021Q)) + eddp->skb_offset += VLAN_HLEN; + } + tcph = tcp_hdr(eddp->skb); + while (eddp->skb_offset < eddp->skb->len) { + data_len = min((int)skb_shinfo(eddp->skb)->gso_size, + (int)(eddp->skb->len - eddp->skb_offset)); + /* prepare qdio hdr */ + if (eddp->qh.hdr.l2.id == QETH_HEADER_TYPE_LAYER2) { + eddp->qh.hdr.l2.pkt_length = data_len + ETH_HLEN + + eddp->nhl + eddp->thl; + if (eddp->mac.h_proto == __constant_htons(ETH_P_8021Q)) + eddp->qh.hdr.l2.pkt_length += VLAN_HLEN; + } else + eddp->qh.hdr.l3.length = data_len + eddp->nhl + + eddp->thl; + /* prepare ip hdr */ + if (eddp->skb->protocol == htons(ETH_P_IP)) { + eddp->nh.ip4.h.tot_len = htons(data_len + eddp->nhl + + eddp->thl); + eddp->nh.ip4.h.check = 0; + eddp->nh.ip4.h.check = + ip_fast_csum((u8 *)&eddp->nh.ip4.h, + eddp->nh.ip4.h.ihl); + } else + eddp->nh.ip6.h.payload_len = htons(data_len + + eddp->thl); + /* prepare tcp hdr */ + if (data_len == (eddp->skb->len - eddp->skb_offset)) { + /* last segment -> set FIN and PSH flags */ + eddp->th.tcp.h.fin = tcph->fin; + eddp->th.tcp.h.psh = tcph->psh; + } + if (eddp->skb->protocol == htons(ETH_P_IP)) + hcsum = qeth_eddp_check_tcp4_hdr(eddp, data_len); + else + hcsum = qeth_eddp_check_tcp6_hdr(eddp, data_len); + /* fill the next segment into the context */ + qeth_eddp_create_segment_hdrs(ctx, eddp, data_len); + qeth_eddp_create_segment_data_tcp(ctx, eddp, data_len, hcsum); + if (eddp->skb_offset >= eddp->skb->len) + break; + /* prepare headers for next round */ + if (eddp->skb->protocol == htons(ETH_P_IP)) + eddp->nh.ip4.h.id = htons(ntohs(eddp->nh.ip4.h.id) + 1); + eddp->th.tcp.h.seq = htonl(ntohl(eddp->th.tcp.h.seq) + + data_len); + } +} + +static int qeth_eddp_fill_context_tcp(struct qeth_eddp_context *ctx, + struct sk_buff *skb, struct qeth_hdr *qhdr) +{ + struct qeth_eddp_data *eddp = NULL; + + QETH_DBF_TEXT(trace, 5, "eddpficx"); + /* 
create our segmentation headers and copy original headers */ + if (skb->protocol == htons(ETH_P_IP)) + eddp = qeth_eddp_create_eddp_data(qhdr, + skb_network_header(skb), + ip_hdrlen(skb), + skb_transport_header(skb), + tcp_hdrlen(skb)); + else + eddp = qeth_eddp_create_eddp_data(qhdr, + skb_network_header(skb), + sizeof(struct ipv6hdr), + skb_transport_header(skb), + tcp_hdrlen(skb)); + + if (eddp == NULL) { + QETH_DBF_TEXT(trace, 2, "eddpfcnm"); + return -ENOMEM; + } + if (qhdr->hdr.l2.id == QETH_HEADER_TYPE_LAYER2) { + skb_set_mac_header(skb, sizeof(struct qeth_hdr)); + memcpy(&eddp->mac, eth_hdr(skb), ETH_HLEN); + if (eddp->mac.h_proto == __constant_htons(ETH_P_8021Q)) { + eddp->vlan[0] = skb->protocol; + eddp->vlan[1] = htons(vlan_tx_tag_get(skb)); + } + } + /* the next flags will only be set on the last segment */ + eddp->th.tcp.h.fin = 0; + eddp->th.tcp.h.psh = 0; + eddp->skb = skb; + /* begin segmentation and fill context */ + __qeth_eddp_fill_context_tcp(ctx, eddp); + kfree(eddp); + return 0; +} + +static void qeth_eddp_calc_num_pages(struct qeth_eddp_context *ctx, + struct sk_buff *skb, int hdr_len) +{ + int skbs_per_page; + + QETH_DBF_TEXT(trace, 5, "eddpcanp"); + /* can we put multiple skbs in one page? */ + skbs_per_page = PAGE_SIZE / (skb_shinfo(skb)->gso_size + hdr_len); + if (skbs_per_page > 1) { + ctx->num_pages = (skb_shinfo(skb)->gso_segs + 1) / + skbs_per_page + 1; + ctx->elements_per_skb = 1; + } else { + /* no -> how many elements per skb? */ + ctx->elements_per_skb = (skb_shinfo(skb)->gso_size + hdr_len + + PAGE_SIZE) >> PAGE_SHIFT; + ctx->num_pages = ctx->elements_per_skb * + (skb_shinfo(skb)->gso_segs + 1); + } + ctx->num_elements = ctx->elements_per_skb * + (skb_shinfo(skb)->gso_segs + 1); +} + +static struct qeth_eddp_context *qeth_eddp_create_context_generic( + struct qeth_card *card, struct sk_buff *skb, int hdr_len) +{ + struct qeth_eddp_context *ctx = NULL; + u8 *addr; + int i; + + QETH_DBF_TEXT(trace, 5, "creddpcg"); + /* create the context and allocate pages */ + ctx = kzalloc(sizeof(struct qeth_eddp_context), GFP_ATOMIC); + if (ctx == NULL) { + QETH_DBF_TEXT(trace, 2, "ceddpcn1"); + return NULL; + } + ctx->type = QETH_LARGE_SEND_EDDP; + qeth_eddp_calc_num_pages(ctx, skb, hdr_len); + if (ctx->elements_per_skb > QETH_MAX_BUFFER_ELEMENTS(card)) { + QETH_DBF_TEXT(trace, 2, "ceddpcis"); + kfree(ctx); + return NULL; + } + ctx->pages = kcalloc(ctx->num_pages, sizeof(u8 *), GFP_ATOMIC); + if (ctx->pages == NULL) { + QETH_DBF_TEXT(trace, 2, "ceddpcn2"); + kfree(ctx); + return NULL; + } + for (i = 0; i < ctx->num_pages; ++i) { + addr = (u8 *)get_zeroed_page(GFP_ATOMIC); + if (addr == NULL) { + QETH_DBF_TEXT(trace, 2, "ceddpcn3"); + ctx->num_pages = i; + qeth_eddp_free_context(ctx); + return NULL; + } + ctx->pages[i] = addr; + } + ctx->elements = kcalloc(ctx->num_elements, + sizeof(struct qeth_eddp_element), GFP_ATOMIC); + if (ctx->elements == NULL) { + QETH_DBF_TEXT(trace, 2, "ceddpcn4"); + qeth_eddp_free_context(ctx); + return NULL; + } + /* reset num_elements; will be incremented again in fill_buffer to + * reflect number of actually used elements */ + ctx->num_elements = 0; + return ctx; +} + +static struct qeth_eddp_context *qeth_eddp_create_context_tcp( + struct qeth_card *card, struct sk_buff *skb, + struct qeth_hdr *qhdr) +{ + struct qeth_eddp_context *ctx = NULL; + + QETH_DBF_TEXT(trace, 5, "creddpct"); + if (skb->protocol == htons(ETH_P_IP)) + ctx = qeth_eddp_create_context_generic(card, skb, + (sizeof(struct qeth_hdr) + + ip_hdrlen(skb) + + 
tcp_hdrlen(skb))); + else if (skb->protocol == htons(ETH_P_IPV6)) + ctx = qeth_eddp_create_context_generic(card, skb, + sizeof(struct qeth_hdr) + sizeof(struct ipv6hdr) + + tcp_hdrlen(skb)); + else + QETH_DBF_TEXT(trace, 2, "cetcpinv"); + + if (ctx == NULL) { + QETH_DBF_TEXT(trace, 2, "creddpnl"); + return NULL; + } + if (qeth_eddp_fill_context_tcp(ctx, skb, qhdr)) { + QETH_DBF_TEXT(trace, 2, "ceddptfe"); + qeth_eddp_free_context(ctx); + return NULL; + } + atomic_set(&ctx->refcnt, 1); + return ctx; +} + +struct qeth_eddp_context *qeth_eddp_create_context(struct qeth_card *card, + struct sk_buff *skb, struct qeth_hdr *qhdr, + unsigned char sk_protocol) +{ + QETH_DBF_TEXT(trace, 5, "creddpc"); + switch (sk_protocol) { + case IPPROTO_TCP: + return qeth_eddp_create_context_tcp(card, skb, qhdr); + default: + QETH_DBF_TEXT(trace, 2, "eddpinvp"); + } + return NULL; +} +EXPORT_SYMBOL_GPL(qeth_eddp_create_context); + +void qeth_tso_fill_header(struct qeth_card *card, struct qeth_hdr *qhdr, + struct sk_buff *skb) +{ + struct qeth_hdr_tso *hdr = (struct qeth_hdr_tso *)qhdr; + struct tcphdr *tcph = tcp_hdr(skb); + struct iphdr *iph = ip_hdr(skb); + struct ipv6hdr *ip6h = ipv6_hdr(skb); + + QETH_DBF_TEXT(trace, 5, "tsofhdr"); + + /*fix header to TSO values ...*/ + hdr->hdr.hdr.l3.id = QETH_HEADER_TYPE_TSO; + /*set values which are fix for the first approach ...*/ + hdr->ext.hdr_tot_len = (__u16) sizeof(struct qeth_hdr_ext_tso); + hdr->ext.imb_hdr_no = 1; + hdr->ext.hdr_type = 1; + hdr->ext.hdr_version = 1; + hdr->ext.hdr_len = 28; + /*insert non-fix values */ + hdr->ext.mss = skb_shinfo(skb)->gso_size; + hdr->ext.dg_hdr_len = (__u16)(iph->ihl*4 + tcph->doff*4); + hdr->ext.payload_len = (__u16)(skb->len - hdr->ext.dg_hdr_len - + sizeof(struct qeth_hdr_tso)); + tcph->check = 0; + if (skb->protocol == ETH_P_IPV6) { + ip6h->payload_len = 0; + tcph->check = ~csum_ipv6_magic(&ip6h->saddr, &ip6h->daddr, + 0, IPPROTO_TCP, 0); + } else { + /*OSA want us to set these values ...*/ + tcph->check = ~csum_tcpudp_magic(iph->saddr, iph->daddr, + 0, IPPROTO_TCP, 0); + iph->tot_len = 0; + iph->check = 0; + } +} +EXPORT_SYMBOL_GPL(qeth_tso_fill_header); + +void qeth_tx_csum(struct sk_buff *skb) +{ + int tlen; + if (skb->protocol == htons(ETH_P_IP)) { + tlen = ntohs(ip_hdr(skb)->tot_len) - (ip_hdr(skb)->ihl << 2); + switch (ip_hdr(skb)->protocol) { + case IPPROTO_TCP: + tcp_hdr(skb)->check = 0; + tcp_hdr(skb)->check = csum_tcpudp_magic( + ip_hdr(skb)->saddr, ip_hdr(skb)->daddr, + tlen, ip_hdr(skb)->protocol, + skb_checksum(skb, skb_transport_offset(skb), + tlen, 0)); + break; + case IPPROTO_UDP: + udp_hdr(skb)->check = 0; + udp_hdr(skb)->check = csum_tcpudp_magic( + ip_hdr(skb)->saddr, ip_hdr(skb)->daddr, + tlen, ip_hdr(skb)->protocol, + skb_checksum(skb, skb_transport_offset(skb), + tlen, 0)); + break; + } + } else if (skb->protocol == htons(ETH_P_IPV6)) { + switch (ipv6_hdr(skb)->nexthdr) { + case IPPROTO_TCP: + tcp_hdr(skb)->check = 0; + tcp_hdr(skb)->check = csum_ipv6_magic( + &ipv6_hdr(skb)->saddr, &ipv6_hdr(skb)->daddr, + ipv6_hdr(skb)->payload_len, + ipv6_hdr(skb)->nexthdr, + skb_checksum(skb, skb_transport_offset(skb), + ipv6_hdr(skb)->payload_len, 0)); + break; + case IPPROTO_UDP: + udp_hdr(skb)->check = 0; + udp_hdr(skb)->check = csum_ipv6_magic( + &ipv6_hdr(skb)->saddr, &ipv6_hdr(skb)->daddr, + ipv6_hdr(skb)->payload_len, + ipv6_hdr(skb)->nexthdr, + skb_checksum(skb, skb_transport_offset(skb), + ipv6_hdr(skb)->payload_len, 0)); + break; + } + } +} +EXPORT_SYMBOL_GPL(qeth_tx_csum); diff --git 
a/drivers/s390/net/qeth_core_offl.h b/drivers/s390/net/qeth_core_offl.h new file mode 100644 index 000000000000..86bf7df8cf16 --- /dev/null +++ b/drivers/s390/net/qeth_core_offl.h @@ -0,0 +1,76 @@ +/* + * drivers/s390/net/qeth_core_offl.h + * + * Copyright IBM Corp. 2007 + * Author(s): Thomas Spatzier , + * Frank Blaschka + */ + +#ifndef __QETH_CORE_OFFL_H__ +#define __QETH_CORE_OFFL_H__ + +struct qeth_eddp_element { + u32 flags; + u32 length; + void *addr; +}; + +struct qeth_eddp_context { + atomic_t refcnt; + enum qeth_large_send_types type; + int num_pages; /* # of allocated pages */ + u8 **pages; /* pointers to pages */ + int offset; /* offset in ctx during creation */ + int num_elements; /* # of required 'SBALEs' */ + struct qeth_eddp_element *elements; /* array of 'SBALEs' */ + int elements_per_skb; /* # of 'SBALEs' per skb **/ +}; + +struct qeth_eddp_context_reference { + struct list_head list; + struct qeth_eddp_context *ctx; +}; + +struct qeth_eddp_data { + struct qeth_hdr qh; + struct ethhdr mac; + __be16 vlan[2]; + union { + struct { + struct iphdr h; + u8 options[40]; + } ip4; + struct { + struct ipv6hdr h; + } ip6; + } nh; + u8 nhl; + void *nh_in_ctx; /* address of nh within the ctx */ + union { + struct { + struct tcphdr h; + u8 options[40]; + } tcp; + } th; + u8 thl; + void *th_in_ctx; /* address of th within the ctx */ + struct sk_buff *skb; + int skb_offset; + int frag; + int frag_offset; +} __attribute__ ((packed)); + +extern struct qeth_eddp_context *qeth_eddp_create_context(struct qeth_card *, + struct sk_buff *, struct qeth_hdr *, unsigned char); +extern void qeth_eddp_put_context(struct qeth_eddp_context *); +extern int qeth_eddp_fill_buffer(struct qeth_qdio_out_q *, + struct qeth_eddp_context *, int); +extern void qeth_eddp_buf_release_contexts(struct qeth_qdio_out_buffer *); +extern int qeth_eddp_check_buffers_for_context(struct qeth_qdio_out_q *, + struct qeth_eddp_context *); + +void qeth_tso_fill_header(struct qeth_card *, struct qeth_hdr *, + struct sk_buff *); +void qeth_tx_csum(struct sk_buff *skb); + +#endif /* __QETH_CORE_EDDP_H__ */ diff --git a/drivers/s390/net/qeth_core_sys.c b/drivers/s390/net/qeth_core_sys.c new file mode 100644 index 000000000000..08a50f057284 --- /dev/null +++ b/drivers/s390/net/qeth_core_sys.c @@ -0,0 +1,651 @@ +/* + * drivers/s390/net/qeth_core_sys.c + * + * Copyright IBM Corp. 
2007 + * Author(s): Utz Bacher , + * Frank Pavlic , + * Thomas Spatzier , + * Frank Blaschka + */ + +#include +#include +#include + +#include "qeth_core.h" + +static ssize_t qeth_dev_state_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct qeth_card *card = dev_get_drvdata(dev); + if (!card) + return -EINVAL; + + switch (card->state) { + case CARD_STATE_DOWN: + return sprintf(buf, "DOWN\n"); + case CARD_STATE_HARDSETUP: + return sprintf(buf, "HARDSETUP\n"); + case CARD_STATE_SOFTSETUP: + return sprintf(buf, "SOFTSETUP\n"); + case CARD_STATE_UP: + if (card->lan_online) + return sprintf(buf, "UP (LAN ONLINE)\n"); + else + return sprintf(buf, "UP (LAN OFFLINE)\n"); + case CARD_STATE_RECOVER: + return sprintf(buf, "RECOVER\n"); + default: + return sprintf(buf, "UNKNOWN\n"); + } +} + +static DEVICE_ATTR(state, 0444, qeth_dev_state_show, NULL); + +static ssize_t qeth_dev_chpid_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct qeth_card *card = dev_get_drvdata(dev); + if (!card) + return -EINVAL; + + return sprintf(buf, "%02X\n", card->info.chpid); +} + +static DEVICE_ATTR(chpid, 0444, qeth_dev_chpid_show, NULL); + +static ssize_t qeth_dev_if_name_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct qeth_card *card = dev_get_drvdata(dev); + if (!card) + return -EINVAL; + return sprintf(buf, "%s\n", QETH_CARD_IFNAME(card)); +} + +static DEVICE_ATTR(if_name, 0444, qeth_dev_if_name_show, NULL); + +static ssize_t qeth_dev_card_type_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct qeth_card *card = dev_get_drvdata(dev); + if (!card) + return -EINVAL; + + return sprintf(buf, "%s\n", qeth_get_cardname_short(card)); +} + +static DEVICE_ATTR(card_type, 0444, qeth_dev_card_type_show, NULL); + +static inline const char *qeth_get_bufsize_str(struct qeth_card *card) +{ + if (card->qdio.in_buf_size == 16384) + return "16k"; + else if (card->qdio.in_buf_size == 24576) + return "24k"; + else if (card->qdio.in_buf_size == 32768) + return "32k"; + else if (card->qdio.in_buf_size == 40960) + return "40k"; + else + return "64k"; +} + +static ssize_t qeth_dev_inbuf_size_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct qeth_card *card = dev_get_drvdata(dev); + if (!card) + return -EINVAL; + + return sprintf(buf, "%s\n", qeth_get_bufsize_str(card)); +} + +static DEVICE_ATTR(inbuf_size, 0444, qeth_dev_inbuf_size_show, NULL); + +static ssize_t qeth_dev_portno_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct qeth_card *card = dev_get_drvdata(dev); + if (!card) + return -EINVAL; + + return sprintf(buf, "%i\n", card->info.portno); +} + +static ssize_t qeth_dev_portno_store(struct device *dev, + struct device_attribute *attr, const char *buf, size_t count) +{ + struct qeth_card *card = dev_get_drvdata(dev); + char *tmp; + unsigned int portno; + + if (!card) + return -EINVAL; + + if ((card->state != CARD_STATE_DOWN) && + (card->state != CARD_STATE_RECOVER)) + return -EPERM; + + portno = simple_strtoul(buf, &tmp, 16); + if (portno > QETH_MAX_PORTNO) { + PRINT_WARN("portno 0x%X is out of range\n", portno); + return -EINVAL; + } + + card->info.portno = portno; + return count; +} + +static DEVICE_ATTR(portno, 0644, qeth_dev_portno_show, qeth_dev_portno_store); + +static ssize_t qeth_dev_portname_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct qeth_card *card = dev_get_drvdata(dev); + char portname[9] = {0, }; + + if 
(!card) + return -EINVAL; + + if (card->info.portname_required) { + memcpy(portname, card->info.portname + 1, 8); + EBCASC(portname, 8); + return sprintf(buf, "%s\n", portname); + } else + return sprintf(buf, "no portname required\n"); +} + +static ssize_t qeth_dev_portname_store(struct device *dev, + struct device_attribute *attr, const char *buf, size_t count) +{ + struct qeth_card *card = dev_get_drvdata(dev); + char *tmp; + int i; + + if (!card) + return -EINVAL; + + if ((card->state != CARD_STATE_DOWN) && + (card->state != CARD_STATE_RECOVER)) + return -EPERM; + + tmp = strsep((char **) &buf, "\n"); + if ((strlen(tmp) > 8) || (strlen(tmp) == 0)) + return -EINVAL; + + card->info.portname[0] = strlen(tmp); + /* for beauty reasons */ + for (i = 1; i < 9; i++) + card->info.portname[i] = ' '; + strcpy(card->info.portname + 1, tmp); + ASCEBC(card->info.portname + 1, 8); + + return count; +} + +static DEVICE_ATTR(portname, 0644, qeth_dev_portname_show, + qeth_dev_portname_store); + +static ssize_t qeth_dev_prioqing_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct qeth_card *card = dev_get_drvdata(dev); + + if (!card) + return -EINVAL; + + switch (card->qdio.do_prio_queueing) { + case QETH_PRIO_Q_ING_PREC: + return sprintf(buf, "%s\n", "by precedence"); + case QETH_PRIO_Q_ING_TOS: + return sprintf(buf, "%s\n", "by type of service"); + default: + return sprintf(buf, "always queue %i\n", + card->qdio.default_out_queue); + } +} + +static ssize_t qeth_dev_prioqing_store(struct device *dev, + struct device_attribute *attr, const char *buf, size_t count) +{ + struct qeth_card *card = dev_get_drvdata(dev); + char *tmp; + + if (!card) + return -EINVAL; + + if ((card->state != CARD_STATE_DOWN) && + (card->state != CARD_STATE_RECOVER)) + return -EPERM; + + /* check if 1920 devices are supported , + * if though we have to permit priority queueing + */ + if (card->qdio.no_out_queues == 1) { + PRINT_WARN("Priority queueing disabled due " + "to hardware limitations!\n"); + card->qdio.do_prio_queueing = QETH_PRIOQ_DEFAULT; + return -EPERM; + } + + tmp = strsep((char **) &buf, "\n"); + if (!strcmp(tmp, "prio_queueing_prec")) + card->qdio.do_prio_queueing = QETH_PRIO_Q_ING_PREC; + else if (!strcmp(tmp, "prio_queueing_tos")) + card->qdio.do_prio_queueing = QETH_PRIO_Q_ING_TOS; + else if (!strcmp(tmp, "no_prio_queueing:0")) { + card->qdio.do_prio_queueing = QETH_NO_PRIO_QUEUEING; + card->qdio.default_out_queue = 0; + } else if (!strcmp(tmp, "no_prio_queueing:1")) { + card->qdio.do_prio_queueing = QETH_NO_PRIO_QUEUEING; + card->qdio.default_out_queue = 1; + } else if (!strcmp(tmp, "no_prio_queueing:2")) { + card->qdio.do_prio_queueing = QETH_NO_PRIO_QUEUEING; + card->qdio.default_out_queue = 2; + } else if (!strcmp(tmp, "no_prio_queueing:3")) { + card->qdio.do_prio_queueing = QETH_NO_PRIO_QUEUEING; + card->qdio.default_out_queue = 3; + } else if (!strcmp(tmp, "no_prio_queueing")) { + card->qdio.do_prio_queueing = QETH_NO_PRIO_QUEUEING; + card->qdio.default_out_queue = QETH_DEFAULT_QUEUE; + } else { + PRINT_WARN("Unknown queueing type '%s'\n", tmp); + return -EINVAL; + } + return count; +} + +static DEVICE_ATTR(priority_queueing, 0644, qeth_dev_prioqing_show, + qeth_dev_prioqing_store); + +static ssize_t qeth_dev_bufcnt_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct qeth_card *card = dev_get_drvdata(dev); + + if (!card) + return -EINVAL; + + return sprintf(buf, "%i\n", card->qdio.in_buf_pool.buf_count); +} + +static ssize_t 
qeth_dev_bufcnt_store(struct device *dev, + struct device_attribute *attr, const char *buf, size_t count) +{ + struct qeth_card *card = dev_get_drvdata(dev); + char *tmp; + int cnt, old_cnt; + int rc; + + if (!card) + return -EINVAL; + + if ((card->state != CARD_STATE_DOWN) && + (card->state != CARD_STATE_RECOVER)) + return -EPERM; + + old_cnt = card->qdio.in_buf_pool.buf_count; + cnt = simple_strtoul(buf, &tmp, 10); + cnt = (cnt < QETH_IN_BUF_COUNT_MIN) ? QETH_IN_BUF_COUNT_MIN : + ((cnt > QETH_IN_BUF_COUNT_MAX) ? QETH_IN_BUF_COUNT_MAX : cnt); + if (old_cnt != cnt) { + rc = qeth_realloc_buffer_pool(card, cnt); + if (rc) + PRINT_WARN("Error (%d) while setting " + "buffer count.\n", rc); + } + return count; +} + +static DEVICE_ATTR(buffer_count, 0644, qeth_dev_bufcnt_show, + qeth_dev_bufcnt_store); + +static ssize_t qeth_dev_recover_store(struct device *dev, + struct device_attribute *attr, const char *buf, size_t count) +{ + struct qeth_card *card = dev_get_drvdata(dev); + char *tmp; + int i; + + if (!card) + return -EINVAL; + + if (card->state != CARD_STATE_UP) + return -EPERM; + + i = simple_strtoul(buf, &tmp, 16); + if (i == 1) + qeth_schedule_recovery(card); + + return count; +} + +static DEVICE_ATTR(recover, 0200, NULL, qeth_dev_recover_store); + +static ssize_t qeth_dev_performance_stats_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct qeth_card *card = dev_get_drvdata(dev); + + if (!card) + return -EINVAL; + + return sprintf(buf, "%i\n", card->options.performance_stats ? 1:0); +} + +static ssize_t qeth_dev_performance_stats_store(struct device *dev, + struct device_attribute *attr, const char *buf, size_t count) +{ + struct qeth_card *card = dev_get_drvdata(dev); + char *tmp; + int i; + + if (!card) + return -EINVAL; + + i = simple_strtoul(buf, &tmp, 16); + if ((i == 0) || (i == 1)) { + if (i == card->options.performance_stats) + return count; + card->options.performance_stats = i; + if (i == 0) + memset(&card->perf_stats, 0, + sizeof(struct qeth_perf_stats)); + card->perf_stats.initial_rx_packets = card->stats.rx_packets; + card->perf_stats.initial_tx_packets = card->stats.tx_packets; + } else { + PRINT_WARN("performance_stats: write 0 or 1 to this file!\n"); + return -EINVAL; + } + return count; +} + +static DEVICE_ATTR(performance_stats, 0644, qeth_dev_performance_stats_show, + qeth_dev_performance_stats_store); + +static ssize_t qeth_dev_layer2_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct qeth_card *card = dev_get_drvdata(dev); + + if (!card) + return -EINVAL; + + return sprintf(buf, "%i\n", card->options.layer2 ? 
1:0); +} + +static ssize_t qeth_dev_layer2_store(struct device *dev, + struct device_attribute *attr, const char *buf, size_t count) +{ + struct qeth_card *card = dev_get_drvdata(dev); + char *tmp; + int i, rc; + enum qeth_discipline_id newdis; + + if (!card) + return -EINVAL; + + if (((card->state != CARD_STATE_DOWN) && + (card->state != CARD_STATE_RECOVER))) + return -EPERM; + + i = simple_strtoul(buf, &tmp, 16); + switch (i) { + case 0: + newdis = QETH_DISCIPLINE_LAYER3; + break; + case 1: + newdis = QETH_DISCIPLINE_LAYER2; + break; + default: + PRINT_WARN("layer2: write 0 or 1 to this file!\n"); + return -EINVAL; + } + + if (card->options.layer2 == newdis) { + return count; + } else { + if (card->discipline.ccwgdriver) { + card->discipline.ccwgdriver->remove(card->gdev); + qeth_core_free_discipline(card); + } + } + + rc = qeth_core_load_discipline(card, newdis); + if (rc) + return rc; + + rc = card->discipline.ccwgdriver->probe(card->gdev); + if (rc) + return rc; + return count; +} + +static DEVICE_ATTR(layer2, 0644, qeth_dev_layer2_show, + qeth_dev_layer2_store); + +static ssize_t qeth_dev_large_send_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct qeth_card *card = dev_get_drvdata(dev); + + if (!card) + return -EINVAL; + + switch (card->options.large_send) { + case QETH_LARGE_SEND_NO: + return sprintf(buf, "%s\n", "no"); + case QETH_LARGE_SEND_EDDP: + return sprintf(buf, "%s\n", "EDDP"); + case QETH_LARGE_SEND_TSO: + return sprintf(buf, "%s\n", "TSO"); + default: + return sprintf(buf, "%s\n", "N/A"); + } +} + +static ssize_t qeth_dev_large_send_store(struct device *dev, + struct device_attribute *attr, const char *buf, size_t count) +{ + struct qeth_card *card = dev_get_drvdata(dev); + enum qeth_large_send_types type; + int rc = 0; + char *tmp; + + if (!card) + return -EINVAL; + tmp = strsep((char **) &buf, "\n"); + if (!strcmp(tmp, "no")) { + type = QETH_LARGE_SEND_NO; + } else if (!strcmp(tmp, "EDDP")) { + type = QETH_LARGE_SEND_EDDP; + } else if (!strcmp(tmp, "TSO")) { + type = QETH_LARGE_SEND_TSO; + } else { + PRINT_WARN("large_send: invalid mode %s!\n", tmp); + return -EINVAL; + } + if (card->options.large_send == type) + return count; + rc = qeth_set_large_send(card, type); + if (rc) + return rc; + return count; +} + +static DEVICE_ATTR(large_send, 0644, qeth_dev_large_send_show, + qeth_dev_large_send_store); + +static ssize_t qeth_dev_blkt_show(char *buf, struct qeth_card *card, int value) +{ + + if (!card) + return -EINVAL; + + return sprintf(buf, "%i\n", value); +} + +static ssize_t qeth_dev_blkt_store(struct qeth_card *card, + const char *buf, size_t count, int *value, int max_value) +{ + char *tmp; + int i; + + if (!card) + return -EINVAL; + + if ((card->state != CARD_STATE_DOWN) && + (card->state != CARD_STATE_RECOVER)) + return -EPERM; + + i = simple_strtoul(buf, &tmp, 10); + if (i <= max_value) { + *value = i; + } else { + PRINT_WARN("blkt total time: write values between" + " 0 and %d to this file!\n", max_value); + return -EINVAL; + } + return count; +} + +static ssize_t qeth_dev_blkt_total_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct qeth_card *card = dev_get_drvdata(dev); + + return qeth_dev_blkt_show(buf, card, card->info.blkt.time_total); +} + +static ssize_t qeth_dev_blkt_total_store(struct device *dev, + struct device_attribute *attr, const char *buf, size_t count) +{ + struct qeth_card *card = dev_get_drvdata(dev); + + return qeth_dev_blkt_store(card, buf, count, + &card->info.blkt.time_total, 
1000); +} + + + +static DEVICE_ATTR(total, 0644, qeth_dev_blkt_total_show, + qeth_dev_blkt_total_store); + +static ssize_t qeth_dev_blkt_inter_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct qeth_card *card = dev_get_drvdata(dev); + + return qeth_dev_blkt_show(buf, card, card->info.blkt.inter_packet); +} + +static ssize_t qeth_dev_blkt_inter_store(struct device *dev, + struct device_attribute *attr, const char *buf, size_t count) +{ + struct qeth_card *card = dev_get_drvdata(dev); + + return qeth_dev_blkt_store(card, buf, count, + &card->info.blkt.inter_packet, 100); +} + +static DEVICE_ATTR(inter, 0644, qeth_dev_blkt_inter_show, + qeth_dev_blkt_inter_store); + +static ssize_t qeth_dev_blkt_inter_jumbo_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct qeth_card *card = dev_get_drvdata(dev); + + return qeth_dev_blkt_show(buf, card, + card->info.blkt.inter_packet_jumbo); +} + +static ssize_t qeth_dev_blkt_inter_jumbo_store(struct device *dev, + struct device_attribute *attr, const char *buf, size_t count) +{ + struct qeth_card *card = dev_get_drvdata(dev); + + return qeth_dev_blkt_store(card, buf, count, + &card->info.blkt.inter_packet_jumbo, 100); +} + +static DEVICE_ATTR(inter_jumbo, 0644, qeth_dev_blkt_inter_jumbo_show, + qeth_dev_blkt_inter_jumbo_store); + +static struct attribute *qeth_blkt_device_attrs[] = { + &dev_attr_total.attr, + &dev_attr_inter.attr, + &dev_attr_inter_jumbo.attr, + NULL, +}; + +static struct attribute_group qeth_device_blkt_group = { + .name = "blkt", + .attrs = qeth_blkt_device_attrs, +}; + +static struct attribute *qeth_device_attrs[] = { + &dev_attr_state.attr, + &dev_attr_chpid.attr, + &dev_attr_if_name.attr, + &dev_attr_card_type.attr, + &dev_attr_inbuf_size.attr, + &dev_attr_portno.attr, + &dev_attr_portname.attr, + &dev_attr_priority_queueing.attr, + &dev_attr_buffer_count.attr, + &dev_attr_recover.attr, + &dev_attr_performance_stats.attr, + &dev_attr_layer2.attr, + &dev_attr_large_send.attr, + NULL, +}; + +static struct attribute_group qeth_device_attr_group = { + .attrs = qeth_device_attrs, +}; + +static struct attribute *qeth_osn_device_attrs[] = { + &dev_attr_state.attr, + &dev_attr_chpid.attr, + &dev_attr_if_name.attr, + &dev_attr_card_type.attr, + &dev_attr_buffer_count.attr, + &dev_attr_recover.attr, + NULL, +}; + +static struct attribute_group qeth_osn_device_attr_group = { + .attrs = qeth_osn_device_attrs, +}; + +int qeth_core_create_device_attributes(struct device *dev) +{ + int ret; + ret = sysfs_create_group(&dev->kobj, &qeth_device_attr_group); + if (ret) + return ret; + ret = sysfs_create_group(&dev->kobj, &qeth_device_blkt_group); + if (ret) + sysfs_remove_group(&dev->kobj, &qeth_device_attr_group); + + return 0; +} + +void qeth_core_remove_device_attributes(struct device *dev) +{ + sysfs_remove_group(&dev->kobj, &qeth_device_attr_group); + sysfs_remove_group(&dev->kobj, &qeth_device_blkt_group); +} + +int qeth_core_create_osn_attributes(struct device *dev) +{ + return sysfs_create_group(&dev->kobj, &qeth_osn_device_attr_group); +} + +void qeth_core_remove_osn_attributes(struct device *dev) +{ + sysfs_remove_group(&dev->kobj, &qeth_osn_device_attr_group); + return; +} diff --git a/drivers/s390/net/qeth_l2_main.c b/drivers/s390/net/qeth_l2_main.c new file mode 100644 index 000000000000..4417a3629ae0 --- /dev/null +++ b/drivers/s390/net/qeth_l2_main.c @@ -0,0 +1,1242 @@ +/* + * drivers/s390/net/qeth_l2_main.c + * + * Copyright IBM Corp. 
2007 + * Author(s): Utz Bacher , + * Frank Pavlic , + * Thomas Spatzier , + * Frank Blaschka + */ + +#include +#include +#include +#include +#include +#include +#include +#include + +#include + +#include "qeth_core.h" +#include "qeth_core_offl.h" + +#define QETH_DBF_TEXT_(name, level, text...) \ + do { \ + if (qeth_dbf_passes(qeth_dbf_##name, level)) { \ + char *dbf_txt_buf = get_cpu_var(qeth_l2_dbf_txt_buf); \ + sprintf(dbf_txt_buf, text); \ + debug_text_event(qeth_dbf_##name, level, dbf_txt_buf); \ + put_cpu_var(qeth_l2_dbf_txt_buf); \ + } \ + } while (0) + +static DEFINE_PER_CPU(char[256], qeth_l2_dbf_txt_buf); + +static int qeth_l2_set_offline(struct ccwgroup_device *); +static int qeth_l2_stop(struct net_device *); +static int qeth_l2_send_delmac(struct qeth_card *, __u8 *); +static int qeth_l2_send_setdelmac(struct qeth_card *, __u8 *, + enum qeth_ipa_cmds, + int (*reply_cb) (struct qeth_card *, + struct qeth_reply*, + unsigned long)); +static void qeth_l2_set_multicast_list(struct net_device *); +static int qeth_l2_recover(void *); + +static int qeth_l2_do_ioctl(struct net_device *dev, struct ifreq *rq, int cmd) +{ + struct qeth_card *card = netdev_priv(dev); + struct mii_ioctl_data *mii_data; + int rc = 0; + + if (!card) + return -ENODEV; + + if ((card->state != CARD_STATE_UP) && + (card->state != CARD_STATE_SOFTSETUP)) + return -ENODEV; + + if (card->info.type == QETH_CARD_TYPE_OSN) + return -EPERM; + + switch (cmd) { + case SIOC_QETH_ADP_SET_SNMP_CONTROL: + rc = qeth_snmp_command(card, rq->ifr_ifru.ifru_data); + break; + case SIOC_QETH_GET_CARD_TYPE: + if ((card->info.type == QETH_CARD_TYPE_OSAE) && + !card->info.guestlan) + return 1; + return 0; + break; + case SIOCGMIIPHY: + mii_data = if_mii(rq); + mii_data->phy_id = 0; + break; + case SIOCGMIIREG: + mii_data = if_mii(rq); + if (mii_data->phy_id != 0) + rc = -EINVAL; + else + mii_data->val_out = qeth_mdio_read(dev, + mii_data->phy_id, mii_data->reg_num); + break; + default: + rc = -EOPNOTSUPP; + } + if (rc) + QETH_DBF_TEXT_(trace, 2, "ioce%d", rc); + return rc; +} + +static int qeth_l2_verify_dev(struct net_device *dev) +{ + struct qeth_card *card; + unsigned long flags; + int rc = 0; + + read_lock_irqsave(&qeth_core_card_list.rwlock, flags); + list_for_each_entry(card, &qeth_core_card_list.list, list) { + if (card->dev == dev) { + rc = QETH_REAL_CARD; + break; + } + } + read_unlock_irqrestore(&qeth_core_card_list.rwlock, flags); + + return rc; +} + +static struct net_device *qeth_l2_netdev_by_devno(unsigned char *read_dev_no) +{ + struct qeth_card *card; + struct net_device *ndev; + unsigned char *readno; + __u16 temp_dev_no, card_dev_no; + char *endp; + unsigned long flags; + + ndev = NULL; + memcpy(&temp_dev_no, read_dev_no, 2); + read_lock_irqsave(&qeth_core_card_list.rwlock, flags); + list_for_each_entry(card, &qeth_core_card_list.list, list) { + readno = CARD_RDEV_ID(card); + readno += (strlen(readno) - 4); + card_dev_no = simple_strtoul(readno, &endp, 16); + if (card_dev_no == temp_dev_no) { + ndev = card->dev; + break; + } + } + read_unlock_irqrestore(&qeth_core_card_list.rwlock, flags); + return ndev; +} + +static int qeth_l2_send_setgroupmac_cb(struct qeth_card *card, + struct qeth_reply *reply, + unsigned long data) +{ + struct qeth_ipa_cmd *cmd; + __u8 *mac; + + QETH_DBF_TEXT(trace, 2, "L2Sgmacb"); + cmd = (struct qeth_ipa_cmd *) data; + mac = &cmd->data.setdelmac.mac[0]; + /* MAC already registered, needed in couple/uncouple case */ + if (cmd->hdr.return_code == 0x2005) { + PRINT_WARN("Group MAC 
%02x:%02x:%02x:%02x:%02x:%02x " \ + "already existing on %s \n", + mac[0], mac[1], mac[2], mac[3], mac[4], mac[5], + QETH_CARD_IFNAME(card)); + cmd->hdr.return_code = 0; + } + if (cmd->hdr.return_code) + PRINT_ERR("Could not set group MAC " \ + "%02x:%02x:%02x:%02x:%02x:%02x on %s: %x\n", + mac[0], mac[1], mac[2], mac[3], mac[4], mac[5], + QETH_CARD_IFNAME(card), cmd->hdr.return_code); + return 0; +} + +static int qeth_l2_send_setgroupmac(struct qeth_card *card, __u8 *mac) +{ + QETH_DBF_TEXT(trace, 2, "L2Sgmac"); + return qeth_l2_send_setdelmac(card, mac, IPA_CMD_SETGMAC, + qeth_l2_send_setgroupmac_cb); +} + +static int qeth_l2_send_delgroupmac_cb(struct qeth_card *card, + struct qeth_reply *reply, + unsigned long data) +{ + struct qeth_ipa_cmd *cmd; + __u8 *mac; + + QETH_DBF_TEXT(trace, 2, "L2Dgmacb"); + cmd = (struct qeth_ipa_cmd *) data; + mac = &cmd->data.setdelmac.mac[0]; + if (cmd->hdr.return_code) + PRINT_ERR("Could not delete group MAC " \ + "%02x:%02x:%02x:%02x:%02x:%02x on %s: %x\n", + mac[0], mac[1], mac[2], mac[3], mac[4], mac[5], + QETH_CARD_IFNAME(card), cmd->hdr.return_code); + return 0; +} + +static int qeth_l2_send_delgroupmac(struct qeth_card *card, __u8 *mac) +{ + QETH_DBF_TEXT(trace, 2, "L2Dgmac"); + return qeth_l2_send_setdelmac(card, mac, IPA_CMD_DELGMAC, + qeth_l2_send_delgroupmac_cb); +} + +static void qeth_l2_add_mc(struct qeth_card *card, __u8 *mac) +{ + struct qeth_mc_mac *mc; + + mc = kmalloc(sizeof(struct qeth_mc_mac), GFP_ATOMIC); + + if (!mc) { + PRINT_ERR("no mem vor mc mac address\n"); + return; + } + + memcpy(mc->mc_addr, mac, OSA_ADDR_LEN); + mc->mc_addrlen = OSA_ADDR_LEN; + + if (!qeth_l2_send_setgroupmac(card, mac)) + list_add_tail(&mc->list, &card->mc_list); + else + kfree(mc); +} + +static void qeth_l2_del_all_mc(struct qeth_card *card) +{ + struct qeth_mc_mac *mc, *tmp; + + spin_lock_bh(&card->mclock); + list_for_each_entry_safe(mc, tmp, &card->mc_list, list) { + qeth_l2_send_delgroupmac(card, mc->mc_addr); + list_del(&mc->list); + kfree(mc); + } + spin_unlock_bh(&card->mclock); +} + +static void qeth_l2_get_packet_type(struct qeth_card *card, + struct qeth_hdr *hdr, struct sk_buff *skb) +{ + __u16 hdr_mac; + + if (!memcmp(skb->data + QETH_HEADER_SIZE, + skb->dev->broadcast, 6)) { + /* broadcast? */ + hdr->hdr.l2.flags[2] |= QETH_LAYER2_FLAG_BROADCAST; + return; + } + hdr_mac = *((__u16 *)skb->data); + /* tr multicast? */ + switch (card->info.link_type) { + case QETH_LINK_TYPE_HSTR: + case QETH_LINK_TYPE_LANE_TR: + if ((hdr_mac == QETH_TR_MAC_NC) || + (hdr_mac == QETH_TR_MAC_C)) + hdr->hdr.l2.flags[2] |= QETH_LAYER2_FLAG_MULTICAST; + else + hdr->hdr.l2.flags[2] |= QETH_LAYER2_FLAG_UNICAST; + break; + /* eth or so multicast? 
*/ + default: + if ((hdr_mac == QETH_ETH_MAC_V4) || + (hdr_mac == QETH_ETH_MAC_V6)) + hdr->hdr.l2.flags[2] |= QETH_LAYER2_FLAG_MULTICAST; + else + hdr->hdr.l2.flags[2] |= QETH_LAYER2_FLAG_UNICAST; + } +} + +static void qeth_l2_fill_header(struct qeth_card *card, struct qeth_hdr *hdr, + struct sk_buff *skb, int ipv, int cast_type) +{ + struct vlan_ethhdr *veth = (struct vlan_ethhdr *)((skb->data) + + QETH_HEADER_SIZE); + + memset(hdr, 0, sizeof(struct qeth_hdr)); + hdr->hdr.l2.id = QETH_HEADER_TYPE_LAYER2; + + /* set byte byte 3 to casting flags */ + if (cast_type == RTN_MULTICAST) + hdr->hdr.l2.flags[2] |= QETH_LAYER2_FLAG_MULTICAST; + else if (cast_type == RTN_BROADCAST) + hdr->hdr.l2.flags[2] |= QETH_LAYER2_FLAG_BROADCAST; + else + qeth_l2_get_packet_type(card, hdr, skb); + + hdr->hdr.l2.pkt_length = skb->len-QETH_HEADER_SIZE; + /* VSWITCH relies on the VLAN + * information to be present in + * the QDIO header */ + if (veth->h_vlan_proto == __constant_htons(ETH_P_8021Q)) { + hdr->hdr.l2.flags[2] |= QETH_LAYER2_FLAG_VLAN; + hdr->hdr.l2.vlan_id = ntohs(veth->h_vlan_TCI); + } +} + +static int qeth_l2_send_setdelvlan_cb(struct qeth_card *card, + struct qeth_reply *reply, unsigned long data) +{ + struct qeth_ipa_cmd *cmd; + + QETH_DBF_TEXT(trace, 2, "L2sdvcb"); + cmd = (struct qeth_ipa_cmd *) data; + if (cmd->hdr.return_code) { + PRINT_ERR("Error in processing VLAN %i on %s: 0x%x. " + "Continuing\n", cmd->data.setdelvlan.vlan_id, + QETH_CARD_IFNAME(card), cmd->hdr.return_code); + QETH_DBF_TEXT_(trace, 2, "L2VL%4x", cmd->hdr.command); + QETH_DBF_TEXT_(trace, 2, "L2%s", CARD_BUS_ID(card)); + QETH_DBF_TEXT_(trace, 2, "err%d", cmd->hdr.return_code); + } + return 0; +} + +static int qeth_l2_send_setdelvlan(struct qeth_card *card, __u16 i, + enum qeth_ipa_cmds ipacmd) +{ + struct qeth_ipa_cmd *cmd; + struct qeth_cmd_buffer *iob; + + QETH_DBF_TEXT_(trace, 4, "L2sdv%x", ipacmd); + iob = qeth_get_ipacmd_buffer(card, ipacmd, QETH_PROT_IPV4); + cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); + cmd->data.setdelvlan.vlan_id = i; + return qeth_send_ipa_cmd(card, iob, + qeth_l2_send_setdelvlan_cb, NULL); +} + +static void qeth_l2_process_vlans(struct qeth_card *card, int clear) +{ + struct qeth_vlan_vid *id; + QETH_DBF_TEXT(trace, 3, "L2prcvln"); + spin_lock_bh(&card->vlanlock); + list_for_each_entry(id, &card->vid_list, list) { + if (clear) + qeth_l2_send_setdelvlan(card, id->vid, + IPA_CMD_DELVLAN); + else + qeth_l2_send_setdelvlan(card, id->vid, + IPA_CMD_SETVLAN); + } + spin_unlock_bh(&card->vlanlock); +} + +static void qeth_l2_vlan_rx_add_vid(struct net_device *dev, unsigned short vid) +{ + struct qeth_card *card = netdev_priv(dev); + struct qeth_vlan_vid *id; + + QETH_DBF_TEXT_(trace, 4, "aid:%d", vid); + id = kmalloc(sizeof(struct qeth_vlan_vid), GFP_ATOMIC); + if (id) { + id->vid = vid; + qeth_l2_send_setdelvlan(card, vid, IPA_CMD_SETVLAN); + spin_lock_bh(&card->vlanlock); + list_add_tail(&id->list, &card->vid_list); + spin_unlock_bh(&card->vlanlock); + } else { + PRINT_ERR("no memory for vid\n"); + } +} + +static void qeth_l2_vlan_rx_kill_vid(struct net_device *dev, unsigned short vid) +{ + struct qeth_vlan_vid *id, *tmpid = NULL; + struct qeth_card *card = netdev_priv(dev); + + QETH_DBF_TEXT_(trace, 4, "kid:%d", vid); + spin_lock_bh(&card->vlanlock); + list_for_each_entry(id, &card->vid_list, list) { + if (id->vid == vid) { + list_del(&id->list); + tmpid = id; + break; + } + } + spin_unlock_bh(&card->vlanlock); + if (tmpid) { + qeth_l2_send_setdelvlan(card, vid, IPA_CMD_DELVLAN); + 
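/* the matching VID entry was already unhooked from vid_list under vlanlock; free it now that DELVLAN has been sent */ +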
kfree(tmpid); + } + qeth_l2_set_multicast_list(card->dev); +} + +static int qeth_l2_stop_card(struct qeth_card *card, int recovery_mode) +{ + int rc = 0; + + QETH_DBF_TEXT(setup , 2, "stopcard"); + QETH_DBF_HEX(setup, 2, &card, sizeof(void *)); + + qeth_set_allowed_threads(card, 0, 1); + if (qeth_wait_for_threads(card, ~QETH_RECOVER_THREAD)) + return -ERESTARTSYS; + if (card->read.state == CH_STATE_UP && + card->write.state == CH_STATE_UP && + (card->state == CARD_STATE_UP)) { + if (recovery_mode && + card->info.type != QETH_CARD_TYPE_OSN) { + qeth_l2_stop(card->dev); + } else { + rtnl_lock(); + dev_close(card->dev); + rtnl_unlock(); + } + if (!card->use_hard_stop) { + __u8 *mac = &card->dev->dev_addr[0]; + rc = qeth_l2_send_delmac(card, mac); + QETH_DBF_TEXT_(setup, 2, "Lerr%d", rc); + } + card->state = CARD_STATE_SOFTSETUP; + } + if (card->state == CARD_STATE_SOFTSETUP) { + qeth_l2_process_vlans(card, 1); + qeth_l2_del_all_mc(card); + qeth_clear_ipacmd_list(card); + card->state = CARD_STATE_HARDSETUP; + } + if (card->state == CARD_STATE_HARDSETUP) { + qeth_qdio_clear_card(card, 0); + qeth_clear_qdio_buffers(card); + qeth_clear_working_pool_list(card); + card->state = CARD_STATE_DOWN; + } + if (card->state == CARD_STATE_DOWN) { + qeth_clear_cmd_buffers(&card->read); + qeth_clear_cmd_buffers(&card->write); + } + card->use_hard_stop = 0; + return rc; +} + +static void qeth_l2_process_inbound_buffer(struct qeth_card *card, + struct qeth_qdio_buffer *buf, int index) +{ + struct qdio_buffer_element *element; + struct sk_buff *skb; + struct qeth_hdr *hdr; + int offset; + unsigned int len; + + /* get first element of current buffer */ + element = (struct qdio_buffer_element *)&buf->buffer->element[0]; + offset = 0; + if (card->options.performance_stats) + card->perf_stats.bufs_rec++; + while ((skb = qeth_core_get_next_skb(card, buf->buffer, &element, + &offset, &hdr))) { + skb->dev = card->dev; + /* is device UP ? 
*/ + if (!(card->dev->flags & IFF_UP)) { + dev_kfree_skb_any(skb); + continue; + } + + switch (hdr->hdr.l2.id) { + case QETH_HEADER_TYPE_LAYER2: + skb->pkt_type = PACKET_HOST; + skb->protocol = eth_type_trans(skb, skb->dev); + if (card->options.checksum_type == NO_CHECKSUMMING) + skb->ip_summed = CHECKSUM_UNNECESSARY; + else + skb->ip_summed = CHECKSUM_NONE; + *((__u32 *)skb->cb) = ++card->seqno.pkt_seqno; + len = skb->len; + netif_rx(skb); + break; + case QETH_HEADER_TYPE_OSN: + skb_push(skb, sizeof(struct qeth_hdr)); + skb_copy_to_linear_data(skb, hdr, + sizeof(struct qeth_hdr)); + len = skb->len; + card->osn_info.data_cb(skb); + break; + default: + dev_kfree_skb_any(skb); + QETH_DBF_TEXT(trace, 3, "inbunkno"); + QETH_DBF_HEX(control, 3, hdr, QETH_DBF_CONTROL_LEN); + continue; + } + card->dev->last_rx = jiffies; + card->stats.rx_packets++; + card->stats.rx_bytes += len; + } +} + +static int qeth_l2_send_setdelmac(struct qeth_card *card, __u8 *mac, + enum qeth_ipa_cmds ipacmd, + int (*reply_cb) (struct qeth_card *, + struct qeth_reply*, + unsigned long)) +{ + struct qeth_ipa_cmd *cmd; + struct qeth_cmd_buffer *iob; + + QETH_DBF_TEXT(trace, 2, "L2sdmac"); + iob = qeth_get_ipacmd_buffer(card, ipacmd, QETH_PROT_IPV4); + cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); + cmd->data.setdelmac.mac_length = OSA_ADDR_LEN; + memcpy(&cmd->data.setdelmac.mac, mac, OSA_ADDR_LEN); + return qeth_send_ipa_cmd(card, iob, reply_cb, NULL); +} + +static int qeth_l2_send_setmac_cb(struct qeth_card *card, + struct qeth_reply *reply, + unsigned long data) +{ + struct qeth_ipa_cmd *cmd; + + QETH_DBF_TEXT(trace, 2, "L2Smaccb"); + cmd = (struct qeth_ipa_cmd *) data; + if (cmd->hdr.return_code) { + QETH_DBF_TEXT_(trace, 2, "L2er%x", cmd->hdr.return_code); + card->info.mac_bits &= ~QETH_LAYER2_MAC_REGISTERED; + cmd->hdr.return_code = -EIO; + } else { + card->info.mac_bits |= QETH_LAYER2_MAC_REGISTERED; + memcpy(card->dev->dev_addr, cmd->data.setdelmac.mac, + OSA_ADDR_LEN); + PRINT_INFO("MAC address %2.2x:%2.2x:%2.2x:%2.2x:%2.2x:%2.2x " + "successfully registered on device %s\n", + card->dev->dev_addr[0], card->dev->dev_addr[1], + card->dev->dev_addr[2], card->dev->dev_addr[3], + card->dev->dev_addr[4], card->dev->dev_addr[5], + card->dev->name); + } + return 0; +} + +static int qeth_l2_send_setmac(struct qeth_card *card, __u8 *mac) +{ + QETH_DBF_TEXT(trace, 2, "L2Setmac"); + return qeth_l2_send_setdelmac(card, mac, IPA_CMD_SETVMAC, + qeth_l2_send_setmac_cb); +} + +static int qeth_l2_send_delmac_cb(struct qeth_card *card, + struct qeth_reply *reply, + unsigned long data) +{ + struct qeth_ipa_cmd *cmd; + + QETH_DBF_TEXT(trace, 2, "L2Dmaccb"); + cmd = (struct qeth_ipa_cmd *) data; + if (cmd->hdr.return_code) { + QETH_DBF_TEXT_(trace, 2, "err%d", cmd->hdr.return_code); + cmd->hdr.return_code = -EIO; + return 0; + } + card->info.mac_bits &= ~QETH_LAYER2_MAC_REGISTERED; + + return 0; +} + +static int qeth_l2_send_delmac(struct qeth_card *card, __u8 *mac) +{ + QETH_DBF_TEXT(trace, 2, "L2Delmac"); + if (!(card->info.mac_bits & QETH_LAYER2_MAC_REGISTERED)) + return 0; + return qeth_l2_send_setdelmac(card, mac, IPA_CMD_DELVMAC, + qeth_l2_send_delmac_cb); +} + +static int qeth_l2_request_initial_mac(struct qeth_card *card) +{ + int rc = 0; + char vendor_pre[] = {0x02, 0x00, 0x00}; + + QETH_DBF_TEXT(setup, 2, "doL2init"); + QETH_DBF_TEXT_(setup, 2, "doL2%s", CARD_BUS_ID(card)); + + rc = qeth_query_setadapterparms(card); + if (rc) { + PRINT_WARN("could not query adapter parameters on device %s: " + "x%x\n", 
CARD_BUS_ID(card), rc); + } + + if (card->info.guestlan) { + rc = qeth_setadpparms_change_macaddr(card); + if (rc) { + PRINT_WARN("couldn't get MAC address on " + "device %s: x%x\n", + CARD_BUS_ID(card), rc); + QETH_DBF_TEXT_(setup, 2, "1err%d", rc); + return rc; + } + QETH_DBF_HEX(setup, 2, card->dev->dev_addr, OSA_ADDR_LEN); + } else { + random_ether_addr(card->dev->dev_addr); + memcpy(card->dev->dev_addr, vendor_pre, 3); + } + return 0; +} + +static int qeth_l2_set_mac_address(struct net_device *dev, void *p) +{ + struct sockaddr *addr = p; + struct qeth_card *card = netdev_priv(dev); + int rc = 0; + + QETH_DBF_TEXT(trace, 3, "setmac"); + + if (qeth_l2_verify_dev(dev) != QETH_REAL_CARD) { + QETH_DBF_TEXT(trace, 3, "setmcINV"); + return -EOPNOTSUPP; + } + + if (card->info.type == QETH_CARD_TYPE_OSN) { + PRINT_WARN("Setting MAC address on %s is not supported.\n", + dev->name); + QETH_DBF_TEXT(trace, 3, "setmcOSN"); + return -EOPNOTSUPP; + } + QETH_DBF_TEXT_(trace, 3, "%s", CARD_BUS_ID(card)); + QETH_DBF_HEX(trace, 3, addr->sa_data, OSA_ADDR_LEN); + rc = qeth_l2_send_delmac(card, &card->dev->dev_addr[0]); + if (!rc) + rc = qeth_l2_send_setmac(card, addr->sa_data); + return rc; +} + +static void qeth_l2_set_multicast_list(struct net_device *dev) +{ + struct qeth_card *card = netdev_priv(dev); + struct dev_mc_list *dm; + + if (card->info.type == QETH_CARD_TYPE_OSN) + return ; + + QETH_DBF_TEXT(trace, 3, "setmulti"); + qeth_l2_del_all_mc(card); + spin_lock_bh(&card->mclock); + for (dm = dev->mc_list; dm; dm = dm->next) + qeth_l2_add_mc(card, dm->dmi_addr); + spin_unlock_bh(&card->mclock); + if (!qeth_adp_supported(card, IPA_SETADP_SET_PROMISC_MODE)) + return; + qeth_setadp_promisc_mode(card); +} + +static int qeth_l2_hard_start_xmit(struct sk_buff *skb, struct net_device *dev) +{ + int rc; + struct qeth_hdr *hdr = NULL; + int elements = 0; + struct qeth_card *card = netdev_priv(dev); + struct sk_buff *new_skb = skb; + int ipv = qeth_get_ip_version(skb); + int cast_type = qeth_get_cast_type(card, skb); + struct qeth_qdio_out_q *queue = card->qdio.out_qs + [qeth_get_priority_queue(card, skb, ipv, cast_type)]; + int tx_bytes = skb->len; + enum qeth_large_send_types large_send = QETH_LARGE_SEND_NO; + struct qeth_eddp_context *ctx = NULL; + + QETH_DBF_TEXT(trace, 6, "l2xmit"); + + if ((card->state != CARD_STATE_UP) || !card->lan_online) { + card->stats.tx_carrier_errors++; + goto tx_drop; + } + + if ((card->info.type == QETH_CARD_TYPE_OSN) && + (skb->protocol == htons(ETH_P_IPV6))) + goto tx_drop; + + if (card->options.performance_stats) { + card->perf_stats.outbound_cnt++; + card->perf_stats.outbound_start_time = qeth_get_micros(); + } + netif_stop_queue(dev); + + if (skb_is_gso(skb)) + large_send = QETH_LARGE_SEND_EDDP; + + if (card->info.type == QETH_CARD_TYPE_OSN) + hdr = (struct qeth_hdr *)skb->data; + else { + new_skb = qeth_prepare_skb(card, skb, &hdr); + if (!new_skb) + goto tx_drop; + qeth_l2_fill_header(card, hdr, new_skb, ipv, cast_type); + } + + if (large_send == QETH_LARGE_SEND_EDDP) { + ctx = qeth_eddp_create_context(card, new_skb, hdr, + skb->sk->sk_protocol); + if (ctx == NULL) { + PRINT_WARN("could not create eddp context\n"); + goto tx_drop; + } + } else { + elements = qeth_get_elements_no(card, (void *)hdr, new_skb, 0); + if (!elements) + goto tx_drop; + } + + if ((large_send == QETH_LARGE_SEND_NO) && + (skb->ip_summed == CHECKSUM_PARTIAL)) + qeth_tx_csum(new_skb); + + if (card->info.type != QETH_CARD_TYPE_IQD) + rc = qeth_do_send_packet(card, queue, new_skb, hdr, + elements, 
ctx); + else + rc = qeth_do_send_packet_fast(card, queue, new_skb, hdr, + elements, ctx); + if (!rc) { + card->stats.tx_packets++; + card->stats.tx_bytes += tx_bytes; + if (new_skb != skb) + dev_kfree_skb_any(skb); + if (card->options.performance_stats) { + if (large_send != QETH_LARGE_SEND_NO) { + card->perf_stats.large_send_bytes += tx_bytes; + card->perf_stats.large_send_cnt++; + } + if (skb_shinfo(new_skb)->nr_frags > 0) { + card->perf_stats.sg_skbs_sent++; + /* nr_frags + skb->data */ + card->perf_stats.sg_frags_sent += + skb_shinfo(new_skb)->nr_frags + 1; + } + } + + if (ctx != NULL) { + qeth_eddp_put_context(ctx); + dev_kfree_skb_any(new_skb); + } + } else { + if (ctx != NULL) + qeth_eddp_put_context(ctx); + + if (rc == -EBUSY) { + if (new_skb != skb) + dev_kfree_skb_any(new_skb); + return NETDEV_TX_BUSY; + } else + goto tx_drop; + } + + netif_wake_queue(dev); + if (card->options.performance_stats) + card->perf_stats.outbound_time += qeth_get_micros() - + card->perf_stats.outbound_start_time; + return rc; + +tx_drop: + card->stats.tx_dropped++; + card->stats.tx_errors++; + if ((new_skb != skb) && new_skb) + dev_kfree_skb_any(new_skb); + dev_kfree_skb_any(skb); + return NETDEV_TX_OK; +} + +static void qeth_l2_qdio_input_handler(struct ccw_device *ccwdev, + unsigned int status, unsigned int qdio_err, + unsigned int siga_err, unsigned int queue, + int first_element, int count, unsigned long card_ptr) +{ + struct net_device *net_dev; + struct qeth_card *card; + struct qeth_qdio_buffer *buffer; + int index; + int i; + + QETH_DBF_TEXT(trace, 6, "qdinput"); + card = (struct qeth_card *) card_ptr; + net_dev = card->dev; + if (card->options.performance_stats) { + card->perf_stats.inbound_cnt++; + card->perf_stats.inbound_start_time = qeth_get_micros(); + } + if (status & QDIO_STATUS_LOOK_FOR_ERROR) { + if (status & QDIO_STATUS_ACTIVATE_CHECK_CONDITION) { + QETH_DBF_TEXT(trace, 1, "qdinchk"); + QETH_DBF_TEXT_(trace, 1, "%s", CARD_BUS_ID(card)); + QETH_DBF_TEXT_(trace, 1, "%04X%04X", first_element, + count); + QETH_DBF_TEXT_(trace, 1, "%04X%04X", queue, status); + qeth_schedule_recovery(card); + return; + } + } + for (i = first_element; i < (first_element + count); ++i) { + index = i % QDIO_MAX_BUFFERS_PER_Q; + buffer = &card->qdio.in_q->bufs[index]; + if (!((status & QDIO_STATUS_LOOK_FOR_ERROR) && + qeth_check_qdio_errors(buffer->buffer, + qdio_err, siga_err, "qinerr"))) + qeth_l2_process_inbound_buffer(card, buffer, index); + /* clear buffer and give back to hardware */ + qeth_put_buffer_pool_entry(card, buffer->pool_entry); + qeth_queue_input_buffer(card, index); + } + if (card->options.performance_stats) + card->perf_stats.inbound_time += qeth_get_micros() - + card->perf_stats.inbound_start_time; +} + +static int qeth_l2_open(struct net_device *dev) +{ + struct qeth_card *card = netdev_priv(dev); + + QETH_DBF_TEXT(trace, 4, "qethopen"); + if (card->state != CARD_STATE_SOFTSETUP) + return -ENODEV; + + if ((card->info.type != QETH_CARD_TYPE_OSN) && + (!(card->info.mac_bits & QETH_LAYER2_MAC_REGISTERED))) { + QETH_DBF_TEXT(trace, 4, "nomacadr"); + return -EPERM; + } + card->data.state = CH_STATE_UP; + card->state = CARD_STATE_UP; + card->dev->flags |= IFF_UP; + netif_start_queue(dev); + + if (!card->lan_online && netif_carrier_ok(dev)) + netif_carrier_off(dev); + return 0; +} + + +static int qeth_l2_stop(struct net_device *dev) +{ + struct qeth_card *card = netdev_priv(dev); + + QETH_DBF_TEXT(trace, 4, "qethstop"); + netif_tx_disable(dev); + card->dev->flags &= ~IFF_UP; + if (card->state == 
CARD_STATE_UP) + card->state = CARD_STATE_SOFTSETUP; + return 0; +} + +static int qeth_l2_probe_device(struct ccwgroup_device *gdev) +{ + struct qeth_card *card = dev_get_drvdata(&gdev->dev); + + INIT_LIST_HEAD(&card->vid_list); + INIT_LIST_HEAD(&card->mc_list); + card->options.layer2 = 1; + card->discipline.input_handler = (qdio_handler_t *) + qeth_l2_qdio_input_handler; + card->discipline.output_handler = (qdio_handler_t *) + qeth_qdio_output_handler; + card->discipline.recover = qeth_l2_recover; + return 0; +} + +static void qeth_l2_remove_device(struct ccwgroup_device *cgdev) +{ + struct qeth_card *card = dev_get_drvdata(&cgdev->dev); + + wait_event(card->wait_q, qeth_threads_running(card, 0xffffffff) == 0); + + if (cgdev->state == CCWGROUP_ONLINE) { + card->use_hard_stop = 1; + qeth_l2_set_offline(cgdev); + } + + if (card->dev) { + unregister_netdev(card->dev); + card->dev = NULL; + } + + qeth_l2_del_all_mc(card); + return; +} + +static struct ethtool_ops qeth_l2_ethtool_ops = { + .get_link = ethtool_op_get_link, + .get_tx_csum = ethtool_op_get_tx_csum, + .set_tx_csum = ethtool_op_set_tx_hw_csum, + .get_sg = ethtool_op_get_sg, + .set_sg = ethtool_op_set_sg, + .get_tso = ethtool_op_get_tso, + .set_tso = ethtool_op_set_tso, + .get_strings = qeth_core_get_strings, + .get_ethtool_stats = qeth_core_get_ethtool_stats, + .get_stats_count = qeth_core_get_stats_count, + .get_drvinfo = qeth_core_get_drvinfo, +}; + +static struct ethtool_ops qeth_l2_osn_ops = { + .get_strings = qeth_core_get_strings, + .get_ethtool_stats = qeth_core_get_ethtool_stats, + .get_stats_count = qeth_core_get_stats_count, + .get_drvinfo = qeth_core_get_drvinfo, +}; + +static int qeth_l2_setup_netdev(struct qeth_card *card) +{ + switch (card->info.type) { + case QETH_CARD_TYPE_OSAE: + card->dev = alloc_etherdev(0); + break; + case QETH_CARD_TYPE_IQD: + card->dev = alloc_netdev(0, "hsi%d", ether_setup); + break; + case QETH_CARD_TYPE_OSN: + card->dev = alloc_netdev(0, "osn%d", ether_setup); + card->dev->flags |= IFF_NOARP; + break; + default: + card->dev = alloc_etherdev(0); + } + + if (!card->dev) + return -ENODEV; + + card->dev->priv = card; + card->dev->tx_timeout = &qeth_tx_timeout; + card->dev->watchdog_timeo = QETH_TX_TIMEOUT; + card->dev->open = qeth_l2_open; + card->dev->stop = qeth_l2_stop; + card->dev->hard_start_xmit = qeth_l2_hard_start_xmit; + card->dev->do_ioctl = qeth_l2_do_ioctl; + card->dev->get_stats = qeth_get_stats; + card->dev->change_mtu = qeth_change_mtu; + card->dev->set_multicast_list = qeth_l2_set_multicast_list; + card->dev->vlan_rx_kill_vid = qeth_l2_vlan_rx_kill_vid; + card->dev->vlan_rx_add_vid = qeth_l2_vlan_rx_add_vid; + card->dev->set_mac_address = qeth_l2_set_mac_address; + card->dev->mtu = card->info.initial_mtu; + if (card->info.type != QETH_CARD_TYPE_OSN) + SET_ETHTOOL_OPS(card->dev, &qeth_l2_ethtool_ops); + else + SET_ETHTOOL_OPS(card->dev, &qeth_l2_osn_ops); + card->dev->features |= NETIF_F_HW_VLAN_FILTER; + card->info.broadcast_capable = 1; + qeth_l2_request_initial_mac(card); + SET_NETDEV_DEV(card->dev, &card->gdev->dev); + return register_netdev(card->dev); +} + +static int __qeth_l2_set_online(struct ccwgroup_device *gdev, int recovery_mode) +{ + struct qeth_card *card = dev_get_drvdata(&gdev->dev); + int rc = 0; + enum qeth_card_states recover_flag; + + BUG_ON(!card); + QETH_DBF_TEXT(setup, 2, "setonlin"); + QETH_DBF_HEX(setup, 2, &card, sizeof(void *)); + + qeth_set_allowed_threads(card, QETH_RECOVER_THREAD, 1); + if (qeth_wait_for_threads(card, ~QETH_RECOVER_THREAD)) { + 
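/* a signal interrupted the wait for other card threads; bail out before bringing the CCW devices online */ +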
PRINT_WARN("set_online of card %s interrupted by user!\n", + CARD_BUS_ID(card)); + return -ERESTARTSYS; + } + + recover_flag = card->state; + rc = ccw_device_set_online(CARD_RDEV(card)); + if (rc) { + QETH_DBF_TEXT_(setup, 2, "1err%d", rc); + return -EIO; + } + rc = ccw_device_set_online(CARD_WDEV(card)); + if (rc) { + QETH_DBF_TEXT_(setup, 2, "1err%d", rc); + return -EIO; + } + rc = ccw_device_set_online(CARD_DDEV(card)); + if (rc) { + QETH_DBF_TEXT_(setup, 2, "1err%d", rc); + return -EIO; + } + + rc = qeth_core_hardsetup_card(card); + if (rc) { + QETH_DBF_TEXT_(setup, 2, "2err%d", rc); + goto out_remove; + } + + if (!card->dev && qeth_l2_setup_netdev(card)) + goto out_remove; + + if (card->info.type != QETH_CARD_TYPE_OSN) + qeth_l2_send_setmac(card, &card->dev->dev_addr[0]); + + card->state = CARD_STATE_HARDSETUP; + qeth_print_status_message(card); + + /* softsetup */ + QETH_DBF_TEXT(setup, 2, "softsetp"); + + rc = qeth_send_startlan(card); + if (rc) { + QETH_DBF_TEXT_(setup, 2, "1err%d", rc); + if (rc == 0xe080) { + PRINT_WARN("LAN on card %s if offline! " + "Waiting for STARTLAN from card.\n", + CARD_BUS_ID(card)); + card->lan_online = 0; + } + return rc; + } else + card->lan_online = 1; + + if (card->info.type != QETH_CARD_TYPE_OSN) { + qeth_set_large_send(card, card->options.large_send); + qeth_l2_process_vlans(card, 0); + } + + netif_tx_disable(card->dev); + + rc = qeth_init_qdio_queues(card); + if (rc) { + QETH_DBF_TEXT_(setup, 2, "6err%d", rc); + goto out_remove; + } + card->state = CARD_STATE_SOFTSETUP; + netif_carrier_on(card->dev); + + qeth_set_allowed_threads(card, 0xffffffff, 0); + if (recover_flag == CARD_STATE_RECOVER) { + if (recovery_mode && + card->info.type != QETH_CARD_TYPE_OSN) { + qeth_l2_open(card->dev); + } else { + rtnl_lock(); + dev_open(card->dev); + rtnl_unlock(); + } + /* this also sets saved unicast addresses */ + qeth_l2_set_multicast_list(card->dev); + } + /* let user_space know that device is online */ + kobject_uevent(&gdev->dev.kobj, KOBJ_CHANGE); + return 0; +out_remove: + card->use_hard_stop = 1; + qeth_l2_stop_card(card, 0); + ccw_device_set_offline(CARD_DDEV(card)); + ccw_device_set_offline(CARD_WDEV(card)); + ccw_device_set_offline(CARD_RDEV(card)); + if (recover_flag == CARD_STATE_RECOVER) + card->state = CARD_STATE_RECOVER; + else + card->state = CARD_STATE_DOWN; + return -ENODEV; +} + +static int qeth_l2_set_online(struct ccwgroup_device *gdev) +{ + return __qeth_l2_set_online(gdev, 0); +} + +static int __qeth_l2_set_offline(struct ccwgroup_device *cgdev, + int recovery_mode) +{ + struct qeth_card *card = dev_get_drvdata(&cgdev->dev); + int rc = 0, rc2 = 0, rc3 = 0; + enum qeth_card_states recover_flag; + + QETH_DBF_TEXT(setup, 3, "setoffl"); + QETH_DBF_HEX(setup, 3, &card, sizeof(void *)); + + if (card->dev && netif_carrier_ok(card->dev)) + netif_carrier_off(card->dev); + recover_flag = card->state; + if (qeth_l2_stop_card(card, recovery_mode) == -ERESTARTSYS) { + PRINT_WARN("Stopping card %s interrupted by user!\n", + CARD_BUS_ID(card)); + return -ERESTARTSYS; + } + rc = ccw_device_set_offline(CARD_DDEV(card)); + rc2 = ccw_device_set_offline(CARD_WDEV(card)); + rc3 = ccw_device_set_offline(CARD_RDEV(card)); + if (!rc) + rc = (rc2) ? 
rc2 : rc3; + if (rc) + QETH_DBF_TEXT_(setup, 2, "1err%d", rc); + if (recover_flag == CARD_STATE_UP) + card->state = CARD_STATE_RECOVER; + /* let user_space know that device is offline */ + kobject_uevent(&cgdev->dev.kobj, KOBJ_CHANGE); + return 0; +} + +static int qeth_l2_set_offline(struct ccwgroup_device *cgdev) +{ + return __qeth_l2_set_offline(cgdev, 0); +} + +static int qeth_l2_recover(void *ptr) +{ + struct qeth_card *card; + int rc = 0; + + card = (struct qeth_card *) ptr; + QETH_DBF_TEXT(trace, 2, "recover1"); + QETH_DBF_HEX(trace, 2, &card, sizeof(void *)); + if (!qeth_do_run_thread(card, QETH_RECOVER_THREAD)) + return 0; + QETH_DBF_TEXT(trace, 2, "recover2"); + PRINT_WARN("Recovery of device %s started ...\n", + CARD_BUS_ID(card)); + card->use_hard_stop = 1; + __qeth_l2_set_offline(card->gdev, 1); + rc = __qeth_l2_set_online(card->gdev, 1); + /* don't run another scheduled recovery */ + qeth_clear_thread_start_bit(card, QETH_RECOVER_THREAD); + qeth_clear_thread_running_bit(card, QETH_RECOVER_THREAD); + if (!rc) + PRINT_INFO("Device %s successfully recovered!\n", + CARD_BUS_ID(card)); + else + PRINT_INFO("Device %s could not be recovered!\n", + CARD_BUS_ID(card)); + return 0; +} + +static int __init qeth_l2_init(void) +{ + PRINT_INFO("register layer 2 discipline\n"); + return 0; +} + +static void __exit qeth_l2_exit(void) +{ + PRINT_INFO("unregister layer 2 discipline\n"); +} + +static void qeth_l2_shutdown(struct ccwgroup_device *gdev) +{ + struct qeth_card *card = dev_get_drvdata(&gdev->dev); + qeth_qdio_clear_card(card, 0); + qeth_clear_qdio_buffers(card); +} + +struct ccwgroup_driver qeth_l2_ccwgroup_driver = { + .probe = qeth_l2_probe_device, + .remove = qeth_l2_remove_device, + .set_online = qeth_l2_set_online, + .set_offline = qeth_l2_set_offline, + .shutdown = qeth_l2_shutdown, +}; +EXPORT_SYMBOL_GPL(qeth_l2_ccwgroup_driver); + +static int qeth_osn_send_control_data(struct qeth_card *card, int len, + struct qeth_cmd_buffer *iob) +{ + unsigned long flags; + int rc = 0; + + QETH_DBF_TEXT(trace, 5, "osndctrd"); + + wait_event(card->wait_q, + atomic_cmpxchg(&card->write.irq_pending, 0, 1) == 0); + qeth_prepare_control_data(card, len, iob); + QETH_DBF_TEXT(trace, 6, "osnoirqp"); + spin_lock_irqsave(get_ccwdev_lock(card->write.ccwdev), flags); + rc = ccw_device_start(card->write.ccwdev, &card->write.ccw, + (addr_t) iob, 0, 0); + spin_unlock_irqrestore(get_ccwdev_lock(card->write.ccwdev), flags); + if (rc) { + PRINT_WARN("qeth_osn_send_control_data: " + "ccw_device_start rc = %i\n", rc); + QETH_DBF_TEXT_(trace, 2, " err%d", rc); + qeth_release_buffer(iob->channel, iob); + atomic_set(&card->write.irq_pending, 0); + wake_up(&card->wait_q); + } + return rc; +} + +static int qeth_osn_send_ipa_cmd(struct qeth_card *card, + struct qeth_cmd_buffer *iob, int data_len) +{ + u16 s1, s2; + + QETH_DBF_TEXT(trace, 4, "osndipa"); + + qeth_prepare_ipa_cmd(card, iob, QETH_PROT_OSN2); + s1 = (u16)(IPA_PDU_HEADER_SIZE + data_len); + s2 = (u16)data_len; + memcpy(QETH_IPA_PDU_LEN_TOTAL(iob->data), &s1, 2); + memcpy(QETH_IPA_PDU_LEN_PDU1(iob->data), &s2, 2); + memcpy(QETH_IPA_PDU_LEN_PDU2(iob->data), &s2, 2); + memcpy(QETH_IPA_PDU_LEN_PDU3(iob->data), &s2, 2); + return qeth_osn_send_control_data(card, s1, iob); +} + +int qeth_osn_assist(struct net_device *dev, void *data, int data_len) +{ + struct qeth_cmd_buffer *iob; + struct qeth_card *card; + int rc; + + QETH_DBF_TEXT(trace, 2, "osnsdmc"); + if (!dev) + return -ENODEV; + card = netdev_priv(dev); + if (!card) + return -ENODEV; + if ((card->state 
!= CARD_STATE_UP) && + (card->state != CARD_STATE_SOFTSETUP)) + return -ENODEV; + iob = qeth_wait_for_buffer(&card->write); + memcpy(iob->data+IPA_PDU_HEADER_SIZE, data, data_len); + rc = qeth_osn_send_ipa_cmd(card, iob, data_len); + return rc; +} +EXPORT_SYMBOL(qeth_osn_assist); + +int qeth_osn_register(unsigned char *read_dev_no, struct net_device **dev, + int (*assist_cb)(struct net_device *, void *), + int (*data_cb)(struct sk_buff *)) +{ + struct qeth_card *card; + + QETH_DBF_TEXT(trace, 2, "osnreg"); + *dev = qeth_l2_netdev_by_devno(read_dev_no); + if (*dev == NULL) + return -ENODEV; + card = netdev_priv(*dev); + if (!card) + return -ENODEV; + if ((assist_cb == NULL) || (data_cb == NULL)) + return -EINVAL; + card->osn_info.assist_cb = assist_cb; + card->osn_info.data_cb = data_cb; + return 0; +} +EXPORT_SYMBOL(qeth_osn_register); + +void qeth_osn_deregister(struct net_device *dev) +{ + struct qeth_card *card; + + QETH_DBF_TEXT(trace, 2, "osndereg"); + if (!dev) + return; + card = netdev_priv(dev); + if (!card) + return; + card->osn_info.assist_cb = NULL; + card->osn_info.data_cb = NULL; + return; +} +EXPORT_SYMBOL(qeth_osn_deregister); + +module_init(qeth_l2_init); +module_exit(qeth_l2_exit); +MODULE_AUTHOR("Frank Blaschka "); +MODULE_DESCRIPTION("qeth layer 2 discipline"); +MODULE_LICENSE("GPL"); diff --git a/drivers/s390/net/qeth_l3.h b/drivers/s390/net/qeth_l3.h new file mode 100644 index 000000000000..f639cc3af22b --- /dev/null +++ b/drivers/s390/net/qeth_l3.h @@ -0,0 +1,76 @@ +/* + * drivers/s390/net/qeth_l3.h + * + * Copyright IBM Corp. 2007 + * Author(s): Utz Bacher , + * Frank Pavlic , + * Thomas Spatzier , + * Frank Blaschka + */ + +#ifndef __QETH_L3_H__ +#define __QETH_L3_H__ + +#include "qeth_core.h" + +#define QETH_DBF_TEXT_(name, level, text...) 
\ + do { \ + if (qeth_dbf_passes(qeth_dbf_##name, level)) { \ + char *dbf_txt_buf = get_cpu_var(qeth_l3_dbf_txt_buf); \ + sprintf(dbf_txt_buf, text); \ + debug_text_event(qeth_dbf_##name, level, dbf_txt_buf); \ + put_cpu_var(qeth_l3_dbf_txt_buf); \ + } \ + } while (0) + +DECLARE_PER_CPU(char[256], qeth_l3_dbf_txt_buf); + +struct qeth_ipaddr { + struct list_head entry; + enum qeth_ip_types type; + enum qeth_ipa_setdelip_flags set_flags; + enum qeth_ipa_setdelip_flags del_flags; + int is_multicast; + int users; + enum qeth_prot_versions proto; + unsigned char mac[OSA_ADDR_LEN]; + union { + struct { + unsigned int addr; + unsigned int mask; + } a4; + struct { + struct in6_addr addr; + unsigned int pfxlen; + } a6; + } u; +}; + +struct qeth_ipato_entry { + struct list_head entry; + enum qeth_prot_versions proto; + char addr[16]; + int mask_bits; +}; + + +void qeth_l3_ipaddr4_to_string(const __u8 *, char *); +int qeth_l3_string_to_ipaddr4(const char *, __u8 *); +void qeth_l3_ipaddr6_to_string(const __u8 *, char *); +int qeth_l3_string_to_ipaddr6(const char *, __u8 *); +void qeth_l3_ipaddr_to_string(enum qeth_prot_versions, const __u8 *, char *); +int qeth_l3_string_to_ipaddr(const char *, enum qeth_prot_versions, __u8 *); +int qeth_l3_create_device_attributes(struct device *); +void qeth_l3_remove_device_attributes(struct device *); +int qeth_l3_setrouting_v4(struct qeth_card *); +int qeth_l3_setrouting_v6(struct qeth_card *); +int qeth_l3_add_ipato_entry(struct qeth_card *, struct qeth_ipato_entry *); +void qeth_l3_del_ipato_entry(struct qeth_card *, enum qeth_prot_versions, + u8 *, int); +int qeth_l3_add_vipa(struct qeth_card *, enum qeth_prot_versions, const u8 *); +void qeth_l3_del_vipa(struct qeth_card *, enum qeth_prot_versions, const u8 *); +int qeth_l3_add_rxip(struct qeth_card *, enum qeth_prot_versions, const u8 *); +void qeth_l3_del_rxip(struct qeth_card *card, enum qeth_prot_versions, + const u8 *); + +#endif /* __QETH_L3_H__ */ diff --git a/drivers/s390/net/qeth_l3_main.c b/drivers/s390/net/qeth_l3_main.c new file mode 100644 index 000000000000..cfa199a9df00 --- /dev/null +++ b/drivers/s390/net/qeth_l3_main.c @@ -0,0 +1,3388 @@ +/* + * drivers/s390/net/qeth_l3_main.c + * + * Copyright IBM Corp. 
2007 + * Author(s): Utz Bacher , + * Frank Pavlic , + * Thomas Spatzier , + * Frank Blaschka + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include + +#include + +#include "qeth_l3.h" +#include "qeth_core_offl.h" + +DEFINE_PER_CPU(char[256], qeth_l3_dbf_txt_buf); + +static int qeth_l3_set_offline(struct ccwgroup_device *); +static int qeth_l3_recover(void *); +static int qeth_l3_stop(struct net_device *); +static void qeth_l3_set_multicast_list(struct net_device *); +static int qeth_l3_neigh_setup(struct net_device *, struct neigh_parms *); +static int qeth_l3_register_addr_entry(struct qeth_card *, + struct qeth_ipaddr *); +static int qeth_l3_deregister_addr_entry(struct qeth_card *, + struct qeth_ipaddr *); +static int __qeth_l3_set_online(struct ccwgroup_device *, int); +static int __qeth_l3_set_offline(struct ccwgroup_device *, int); + + +static int qeth_l3_isxdigit(char *buf) +{ + while (*buf) { + if (!isxdigit(*buf++)) + return 0; + } + return 1; +} + +void qeth_l3_ipaddr4_to_string(const __u8 *addr, char *buf) +{ + sprintf(buf, "%i.%i.%i.%i", addr[0], addr[1], addr[2], addr[3]); +} + +int qeth_l3_string_to_ipaddr4(const char *buf, __u8 *addr) +{ + int count = 0, rc = 0; + int in[4]; + char c; + + rc = sscanf(buf, "%u.%u.%u.%u%c", + &in[0], &in[1], &in[2], &in[3], &c); + if (rc != 4 && (rc != 5 || c != '\n')) + return -EINVAL; + for (count = 0; count < 4; count++) { + if (in[count] > 255) + return -EINVAL; + addr[count] = in[count]; + } + return 0; +} + +void qeth_l3_ipaddr6_to_string(const __u8 *addr, char *buf) +{ + sprintf(buf, "%02x%02x:%02x%02x:%02x%02x:%02x%02x" + ":%02x%02x:%02x%02x:%02x%02x:%02x%02x", + addr[0], addr[1], addr[2], addr[3], + addr[4], addr[5], addr[6], addr[7], + addr[8], addr[9], addr[10], addr[11], + addr[12], addr[13], addr[14], addr[15]); +} + +int qeth_l3_string_to_ipaddr6(const char *buf, __u8 *addr) +{ + const char *end, *end_tmp, *start; + __u16 *in; + char num[5]; + int num2, cnt, out, found, save_cnt; + unsigned short in_tmp[8] = {0, }; + + cnt = out = found = save_cnt = num2 = 0; + end = start = buf; + in = (__u16 *) addr; + memset(in, 0, 16); + while (*end) { + end = strchr(start, ':'); + if (end == NULL) { + end = buf + strlen(buf); + end_tmp = strchr(start, '\n'); + if (end_tmp != NULL) + end = end_tmp; + out = 1; + } + if ((end - start)) { + memset(num, 0, 5); + if ((end - start) > 4) + return -EINVAL; + memcpy(num, start, end - start); + if (!qeth_l3_isxdigit(num)) + return -EINVAL; + sscanf(start, "%x", &num2); + if (found) + in_tmp[save_cnt++] = num2; + else + in[cnt++] = num2; + if (out) + break; + } else { + if (found) + return -EINVAL; + found = 1; + } + start = ++end; + } + if (cnt + save_cnt > 8) + return -EINVAL; + cnt = 7; + while (save_cnt) + in[cnt--] = in_tmp[--save_cnt]; + return 0; +} + +void qeth_l3_ipaddr_to_string(enum qeth_prot_versions proto, const __u8 *addr, + char *buf) +{ + if (proto == QETH_PROT_IPV4) + qeth_l3_ipaddr4_to_string(addr, buf); + else if (proto == QETH_PROT_IPV6) + qeth_l3_ipaddr6_to_string(addr, buf); +} + +int qeth_l3_string_to_ipaddr(const char *buf, enum qeth_prot_versions proto, + __u8 *addr) +{ + if (proto == QETH_PROT_IPV4) + return qeth_l3_string_to_ipaddr4(buf, addr); + else if (proto == QETH_PROT_IPV6) + return qeth_l3_string_to_ipaddr6(buf, addr); + else + return -EINVAL; +} + +static void qeth_l3_convert_addr_to_bits(u8 *addr, u8 *bits, int len) +{ + int i, j; + u8 octet; + + for (i = 0; i < len; ++i) { + octet = 
+		for (j = 7; j >= 0; --j) {
+			bits[i*8 + j] = octet & 1;
+			octet >>= 1;
+		}
+	}
+}
+
+static int qeth_l3_is_addr_covered_by_ipato(struct qeth_card *card,
+						struct qeth_ipaddr *addr)
+{
+	struct qeth_ipato_entry *ipatoe;
+	u8 addr_bits[128] = {0, };
+	u8 ipatoe_bits[128] = {0, };
+	int rc = 0;
+
+	if (!card->ipato.enabled)
+		return 0;
+
+	qeth_l3_convert_addr_to_bits((u8 *) &addr->u, addr_bits,
+				     (addr->proto == QETH_PROT_IPV4)? 4:16);
+	list_for_each_entry(ipatoe, &card->ipato.entries, entry) {
+		if (addr->proto != ipatoe->proto)
+			continue;
+		qeth_l3_convert_addr_to_bits(ipatoe->addr, ipatoe_bits,
+					     (ipatoe->proto == QETH_PROT_IPV4) ?
+					     4 : 16);
+		if (addr->proto == QETH_PROT_IPV4)
+			rc = !memcmp(addr_bits, ipatoe_bits,
+				     min(32, ipatoe->mask_bits));
+		else
+			rc = !memcmp(addr_bits, ipatoe_bits,
+				     min(128, ipatoe->mask_bits));
+		if (rc)
+			break;
+	}
+	/* invert? */
+	if ((addr->proto == QETH_PROT_IPV4) && card->ipato.invert4)
+		rc = !rc;
+	else if ((addr->proto == QETH_PROT_IPV6) && card->ipato.invert6)
+		rc = !rc;
+
+	return rc;
+}
+
+/*
+ * Add an IP address to the todo list. If there is already an "add todo"
+ * entry in this list we just increment the reference count.
+ * Returns 0 if we just incremented the reference count.
+ */
+static int __qeth_l3_insert_ip_todo(struct qeth_card *card,
+		struct qeth_ipaddr *addr, int add)
+{
+	struct qeth_ipaddr *tmp, *t;
+	int found = 0;
+
+	list_for_each_entry_safe(tmp, t, card->ip_tbd_list, entry) {
+		if ((addr->type == QETH_IP_TYPE_DEL_ALL_MC) &&
+		    (tmp->type == QETH_IP_TYPE_DEL_ALL_MC))
+			return 0;
+		if ((tmp->proto == QETH_PROT_IPV4) &&
+		    (addr->proto == QETH_PROT_IPV4) &&
+		    (tmp->type == addr->type) &&
+		    (tmp->is_multicast == addr->is_multicast) &&
+		    (tmp->u.a4.addr == addr->u.a4.addr) &&
+		    (tmp->u.a4.mask == addr->u.a4.mask)) {
+			found = 1;
+			break;
+		}
+		if ((tmp->proto == QETH_PROT_IPV6) &&
+		    (addr->proto == QETH_PROT_IPV6) &&
+		    (tmp->type == addr->type) &&
+		    (tmp->is_multicast == addr->is_multicast) &&
+		    (tmp->u.a6.pfxlen == addr->u.a6.pfxlen) &&
+		    (memcmp(&tmp->u.a6.addr, &addr->u.a6.addr,
+			    sizeof(struct in6_addr)) == 0)) {
+			found = 1;
+			break;
+		}
+	}
+	if (found) {
+		if (addr->users != 0)
+			tmp->users += addr->users;
+		else
+			tmp->users += add ? 1 : -1;
+		if (tmp->users == 0) {
+			list_del(&tmp->entry);
+			kfree(tmp);
+		}
+		return 0;
+	} else {
+		if (addr->type == QETH_IP_TYPE_DEL_ALL_MC)
+			list_add(&addr->entry, card->ip_tbd_list);
+		else {
+			if (addr->users == 0)
+				addr->users += add ? 
1 : -1; + if (add && (addr->type == QETH_IP_TYPE_NORMAL) && + qeth_l3_is_addr_covered_by_ipato(card, addr)) { + QETH_DBF_TEXT(trace, 2, "tkovaddr"); + addr->set_flags |= QETH_IPA_SETIP_TAKEOVER_FLAG; + } + list_add_tail(&addr->entry, card->ip_tbd_list); + } + return 1; + } +} + +static int qeth_l3_delete_ip(struct qeth_card *card, struct qeth_ipaddr *addr) +{ + unsigned long flags; + int rc = 0; + + QETH_DBF_TEXT(trace, 4, "delip"); + + if (addr->proto == QETH_PROT_IPV4) + QETH_DBF_HEX(trace, 4, &addr->u.a4.addr, 4); + else { + QETH_DBF_HEX(trace, 4, &addr->u.a6.addr, 8); + QETH_DBF_HEX(trace, 4, ((char *)&addr->u.a6.addr) + 8, 8); + } + spin_lock_irqsave(&card->ip_lock, flags); + rc = __qeth_l3_insert_ip_todo(card, addr, 0); + spin_unlock_irqrestore(&card->ip_lock, flags); + return rc; +} + +static int qeth_l3_add_ip(struct qeth_card *card, struct qeth_ipaddr *addr) +{ + unsigned long flags; + int rc = 0; + + QETH_DBF_TEXT(trace, 4, "addip"); + if (addr->proto == QETH_PROT_IPV4) + QETH_DBF_HEX(trace, 4, &addr->u.a4.addr, 4); + else { + QETH_DBF_HEX(trace, 4, &addr->u.a6.addr, 8); + QETH_DBF_HEX(trace, 4, ((char *)&addr->u.a6.addr) + 8, 8); + } + spin_lock_irqsave(&card->ip_lock, flags); + rc = __qeth_l3_insert_ip_todo(card, addr, 1); + spin_unlock_irqrestore(&card->ip_lock, flags); + return rc; +} + + +static struct qeth_ipaddr *qeth_l3_get_addr_buffer( + enum qeth_prot_versions prot) +{ + struct qeth_ipaddr *addr; + + addr = kzalloc(sizeof(struct qeth_ipaddr), GFP_ATOMIC); + if (addr == NULL) { + PRINT_WARN("Not enough memory to add address\n"); + return NULL; + } + addr->type = QETH_IP_TYPE_NORMAL; + addr->proto = prot; + return addr; +} + +static void qeth_l3_delete_mc_addresses(struct qeth_card *card) +{ + struct qeth_ipaddr *iptodo; + unsigned long flags; + + QETH_DBF_TEXT(trace, 4, "delmc"); + iptodo = qeth_l3_get_addr_buffer(QETH_PROT_IPV4); + if (!iptodo) { + QETH_DBF_TEXT(trace, 2, "dmcnomem"); + return; + } + iptodo->type = QETH_IP_TYPE_DEL_ALL_MC; + spin_lock_irqsave(&card->ip_lock, flags); + if (!__qeth_l3_insert_ip_todo(card, iptodo, 0)) + kfree(iptodo); + spin_unlock_irqrestore(&card->ip_lock, flags); +} + +/* + * Add/remove address to/from card's ip list, i.e. try to add or remove + * reference to/from an IP address that is already registered on the card. 
+ * Returns: + * 0 address was on card and its reference count has been adjusted, + * but is still > 0, so nothing has to be done + * also returns 0 if card was not on card and the todo was to delete + * the address -> there is also nothing to be done + * 1 address was not on card and the todo is to add it to the card's ip + * list + * -1 address was on card and its reference count has been decremented + * to <= 0 by the todo -> address must be removed from card + */ +static int __qeth_l3_ref_ip_on_card(struct qeth_card *card, + struct qeth_ipaddr *todo, struct qeth_ipaddr **__addr) +{ + struct qeth_ipaddr *addr; + int found = 0; + + list_for_each_entry(addr, &card->ip_list, entry) { + if ((addr->proto == QETH_PROT_IPV4) && + (todo->proto == QETH_PROT_IPV4) && + (addr->type == todo->type) && + (addr->u.a4.addr == todo->u.a4.addr) && + (addr->u.a4.mask == todo->u.a4.mask)) { + found = 1; + break; + } + if ((addr->proto == QETH_PROT_IPV6) && + (todo->proto == QETH_PROT_IPV6) && + (addr->type == todo->type) && + (addr->u.a6.pfxlen == todo->u.a6.pfxlen) && + (memcmp(&addr->u.a6.addr, &todo->u.a6.addr, + sizeof(struct in6_addr)) == 0)) { + found = 1; + break; + } + } + if (found) { + addr->users += todo->users; + if (addr->users <= 0) { + *__addr = addr; + return -1; + } else { + /* for VIPA and RXIP limit refcount to 1 */ + if (addr->type != QETH_IP_TYPE_NORMAL) + addr->users = 1; + return 0; + } + } + if (todo->users > 0) { + /* for VIPA and RXIP limit refcount to 1 */ + if (todo->type != QETH_IP_TYPE_NORMAL) + todo->users = 1; + return 1; + } else + return 0; +} + +static void __qeth_l3_delete_all_mc(struct qeth_card *card, + unsigned long *flags) +{ + struct qeth_ipaddr *addr, *tmp; + int rc; +again: + list_for_each_entry_safe(addr, tmp, &card->ip_list, entry) { + if (addr->is_multicast) { + list_del(&addr->entry); + spin_unlock_irqrestore(&card->ip_lock, *flags); + rc = qeth_l3_deregister_addr_entry(card, addr); + spin_lock_irqsave(&card->ip_lock, *flags); + if (!rc) { + kfree(addr); + goto again; + } else + list_add(&addr->entry, &card->ip_list); + } + } +} + +static void qeth_l3_set_ip_addr_list(struct qeth_card *card) +{ + struct list_head *tbd_list; + struct qeth_ipaddr *todo, *addr; + unsigned long flags; + int rc; + + QETH_DBF_TEXT(trace, 2, "sdiplist"); + QETH_DBF_HEX(trace, 2, &card, sizeof(void *)); + + spin_lock_irqsave(&card->ip_lock, flags); + tbd_list = card->ip_tbd_list; + card->ip_tbd_list = kmalloc(sizeof(struct list_head), GFP_ATOMIC); + if (!card->ip_tbd_list) { + QETH_DBF_TEXT(trace, 0, "silnomem"); + card->ip_tbd_list = tbd_list; + spin_unlock_irqrestore(&card->ip_lock, flags); + return; + } else + INIT_LIST_HEAD(card->ip_tbd_list); + + while (!list_empty(tbd_list)) { + todo = list_entry(tbd_list->next, struct qeth_ipaddr, entry); + list_del(&todo->entry); + if (todo->type == QETH_IP_TYPE_DEL_ALL_MC) { + __qeth_l3_delete_all_mc(card, &flags); + kfree(todo); + continue; + } + rc = __qeth_l3_ref_ip_on_card(card, todo, &addr); + if (rc == 0) { + /* nothing to be done; only adjusted refcount */ + kfree(todo); + } else if (rc == 1) { + /* new entry to be added to on-card list */ + spin_unlock_irqrestore(&card->ip_lock, flags); + rc = qeth_l3_register_addr_entry(card, todo); + spin_lock_irqsave(&card->ip_lock, flags); + if (!rc) + list_add_tail(&todo->entry, &card->ip_list); + else + kfree(todo); + } else if (rc == -1) { + /* on-card entry to be removed */ + list_del_init(&addr->entry); + spin_unlock_irqrestore(&card->ip_lock, flags); + rc = 
qeth_l3_deregister_addr_entry(card, addr); + spin_lock_irqsave(&card->ip_lock, flags); + if (!rc) + kfree(addr); + else + list_add_tail(&addr->entry, &card->ip_list); + kfree(todo); + } + } + spin_unlock_irqrestore(&card->ip_lock, flags); + kfree(tbd_list); +} + +static void qeth_l3_clear_ip_list(struct qeth_card *card, int clean, + int recover) +{ + struct qeth_ipaddr *addr, *tmp; + unsigned long flags; + + QETH_DBF_TEXT(trace, 4, "clearip"); + spin_lock_irqsave(&card->ip_lock, flags); + /* clear todo list */ + list_for_each_entry_safe(addr, tmp, card->ip_tbd_list, entry) { + list_del(&addr->entry); + kfree(addr); + } + + while (!list_empty(&card->ip_list)) { + addr = list_entry(card->ip_list.next, + struct qeth_ipaddr, entry); + list_del_init(&addr->entry); + if (clean) { + spin_unlock_irqrestore(&card->ip_lock, flags); + qeth_l3_deregister_addr_entry(card, addr); + spin_lock_irqsave(&card->ip_lock, flags); + } + if (!recover || addr->is_multicast) { + kfree(addr); + continue; + } + list_add_tail(&addr->entry, card->ip_tbd_list); + } + spin_unlock_irqrestore(&card->ip_lock, flags); +} + +static int qeth_l3_address_exists_in_list(struct list_head *list, + struct qeth_ipaddr *addr, int same_type) +{ + struct qeth_ipaddr *tmp; + + list_for_each_entry(tmp, list, entry) { + if ((tmp->proto == QETH_PROT_IPV4) && + (addr->proto == QETH_PROT_IPV4) && + ((same_type && (tmp->type == addr->type)) || + (!same_type && (tmp->type != addr->type))) && + (tmp->u.a4.addr == addr->u.a4.addr)) + return 1; + + if ((tmp->proto == QETH_PROT_IPV6) && + (addr->proto == QETH_PROT_IPV6) && + ((same_type && (tmp->type == addr->type)) || + (!same_type && (tmp->type != addr->type))) && + (memcmp(&tmp->u.a6.addr, &addr->u.a6.addr, + sizeof(struct in6_addr)) == 0)) + return 1; + + } + return 0; +} + +static int qeth_l3_send_setdelmc(struct qeth_card *card, + struct qeth_ipaddr *addr, int ipacmd) +{ + int rc; + struct qeth_cmd_buffer *iob; + struct qeth_ipa_cmd *cmd; + + QETH_DBF_TEXT(trace, 4, "setdelmc"); + + iob = qeth_get_ipacmd_buffer(card, ipacmd, addr->proto); + cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); + memcpy(&cmd->data.setdelipm.mac, addr->mac, OSA_ADDR_LEN); + if (addr->proto == QETH_PROT_IPV6) + memcpy(cmd->data.setdelipm.ip6, &addr->u.a6.addr, + sizeof(struct in6_addr)); + else + memcpy(&cmd->data.setdelipm.ip4, &addr->u.a4.addr, 4); + + rc = qeth_send_ipa_cmd(card, iob, NULL, NULL); + + return rc; +} + +static void qeth_l3_fill_netmask(u8 *netmask, unsigned int len) +{ + int i, j; + for (i = 0; i < 16; i++) { + j = (len) - (i * 8); + if (j >= 8) + netmask[i] = 0xff; + else if (j > 0) + netmask[i] = (u8)(0xFF00 >> j); + else + netmask[i] = 0; + } +} + +static int qeth_l3_send_setdelip(struct qeth_card *card, + struct qeth_ipaddr *addr, int ipacmd, unsigned int flags) +{ + int rc; + struct qeth_cmd_buffer *iob; + struct qeth_ipa_cmd *cmd; + __u8 netmask[16]; + + QETH_DBF_TEXT(trace, 4, "setdelip"); + QETH_DBF_TEXT_(trace, 4, "flags%02X", flags); + + iob = qeth_get_ipacmd_buffer(card, ipacmd, addr->proto); + cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); + if (addr->proto == QETH_PROT_IPV6) { + memcpy(cmd->data.setdelip6.ip_addr, &addr->u.a6.addr, + sizeof(struct in6_addr)); + qeth_l3_fill_netmask(netmask, addr->u.a6.pfxlen); + memcpy(cmd->data.setdelip6.mask, netmask, + sizeof(struct in6_addr)); + cmd->data.setdelip6.flags = flags; + } else { + memcpy(cmd->data.setdelip4.ip_addr, &addr->u.a4.addr, 4); + memcpy(cmd->data.setdelip4.mask, &addr->u.a4.mask, 4); + 
cmd->data.setdelip4.flags = flags; + } + + rc = qeth_send_ipa_cmd(card, iob, NULL, NULL); + + return rc; +} + +static int qeth_l3_send_setrouting(struct qeth_card *card, + enum qeth_routing_types type, enum qeth_prot_versions prot) +{ + int rc; + struct qeth_ipa_cmd *cmd; + struct qeth_cmd_buffer *iob; + + QETH_DBF_TEXT(trace, 4, "setroutg"); + iob = qeth_get_ipacmd_buffer(card, IPA_CMD_SETRTG, prot); + cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); + cmd->data.setrtg.type = (type); + rc = qeth_send_ipa_cmd(card, iob, NULL, NULL); + + return rc; +} + +static void qeth_l3_correct_routing_type(struct qeth_card *card, + enum qeth_routing_types *type, enum qeth_prot_versions prot) +{ + if (card->info.type == QETH_CARD_TYPE_IQD) { + switch (*type) { + case NO_ROUTER: + case PRIMARY_CONNECTOR: + case SECONDARY_CONNECTOR: + case MULTICAST_ROUTER: + return; + default: + goto out_inval; + } + } else { + switch (*type) { + case NO_ROUTER: + case PRIMARY_ROUTER: + case SECONDARY_ROUTER: + return; + case MULTICAST_ROUTER: + if (qeth_is_ipafunc_supported(card, prot, + IPA_OSA_MC_ROUTER)) + return; + default: + goto out_inval; + } + } +out_inval: + PRINT_WARN("Routing type '%s' not supported for interface %s.\n" + "Router status set to 'no router'.\n", + ((*type == PRIMARY_ROUTER)? "primary router" : + (*type == SECONDARY_ROUTER)? "secondary router" : + (*type == PRIMARY_CONNECTOR)? "primary connector" : + (*type == SECONDARY_CONNECTOR)? "secondary connector" : + (*type == MULTICAST_ROUTER)? "multicast router" : + "unknown"), + card->dev->name); + *type = NO_ROUTER; +} + +int qeth_l3_setrouting_v4(struct qeth_card *card) +{ + int rc; + + QETH_DBF_TEXT(trace, 3, "setrtg4"); + + qeth_l3_correct_routing_type(card, &card->options.route4.type, + QETH_PROT_IPV4); + + rc = qeth_l3_send_setrouting(card, card->options.route4.type, + QETH_PROT_IPV4); + if (rc) { + card->options.route4.type = NO_ROUTER; + PRINT_WARN("Error (0x%04x) while setting routing type on %s. " + "Type set to 'no router'.\n", + rc, QETH_CARD_IFNAME(card)); + } + return rc; +} + +int qeth_l3_setrouting_v6(struct qeth_card *card) +{ + int rc = 0; + + QETH_DBF_TEXT(trace, 3, "setrtg6"); +#ifdef CONFIG_QETH_IPV6 + + if (!qeth_is_supported(card, IPA_IPV6)) + return 0; + qeth_l3_correct_routing_type(card, &card->options.route6.type, + QETH_PROT_IPV6); + + rc = qeth_l3_send_setrouting(card, card->options.route6.type, + QETH_PROT_IPV6); + if (rc) { + card->options.route6.type = NO_ROUTER; + PRINT_WARN("Error (0x%04x) while setting routing type on %s. " + "Type set to 'no router'.\n", + rc, QETH_CARD_IFNAME(card)); + } +#endif + return rc; +} + +/* + * IP address takeover related functions + */ +static void qeth_l3_clear_ipato_list(struct qeth_card *card) +{ + + struct qeth_ipato_entry *ipatoe, *tmp; + unsigned long flags; + + spin_lock_irqsave(&card->ip_lock, flags); + list_for_each_entry_safe(ipatoe, tmp, &card->ipato.entries, entry) { + list_del(&ipatoe->entry); + kfree(ipatoe); + } + spin_unlock_irqrestore(&card->ip_lock, flags); +} + +int qeth_l3_add_ipato_entry(struct qeth_card *card, + struct qeth_ipato_entry *new) +{ + struct qeth_ipato_entry *ipatoe; + unsigned long flags; + int rc = 0; + + QETH_DBF_TEXT(trace, 2, "addipato"); + spin_lock_irqsave(&card->ip_lock, flags); + list_for_each_entry(ipatoe, &card->ipato.entries, entry) { + if (ipatoe->proto != new->proto) + continue; + if (!memcmp(ipatoe->addr, new->addr, + (ipatoe->proto == QETH_PROT_IPV4)? 
4:16) && + (ipatoe->mask_bits == new->mask_bits)) { + PRINT_WARN("ipato entry already exists!\n"); + rc = -EEXIST; + break; + } + } + if (!rc) + list_add_tail(&new->entry, &card->ipato.entries); + + spin_unlock_irqrestore(&card->ip_lock, flags); + return rc; +} + +void qeth_l3_del_ipato_entry(struct qeth_card *card, + enum qeth_prot_versions proto, u8 *addr, int mask_bits) +{ + struct qeth_ipato_entry *ipatoe, *tmp; + unsigned long flags; + + QETH_DBF_TEXT(trace, 2, "delipato"); + spin_lock_irqsave(&card->ip_lock, flags); + list_for_each_entry_safe(ipatoe, tmp, &card->ipato.entries, entry) { + if (ipatoe->proto != proto) + continue; + if (!memcmp(ipatoe->addr, addr, + (proto == QETH_PROT_IPV4)? 4:16) && + (ipatoe->mask_bits == mask_bits)) { + list_del(&ipatoe->entry); + kfree(ipatoe); + } + } + spin_unlock_irqrestore(&card->ip_lock, flags); +} + +/* + * VIPA related functions + */ +int qeth_l3_add_vipa(struct qeth_card *card, enum qeth_prot_versions proto, + const u8 *addr) +{ + struct qeth_ipaddr *ipaddr; + unsigned long flags; + int rc = 0; + + ipaddr = qeth_l3_get_addr_buffer(proto); + if (ipaddr) { + if (proto == QETH_PROT_IPV4) { + QETH_DBF_TEXT(trace, 2, "addvipa4"); + memcpy(&ipaddr->u.a4.addr, addr, 4); + ipaddr->u.a4.mask = 0; + } else if (proto == QETH_PROT_IPV6) { + QETH_DBF_TEXT(trace, 2, "addvipa6"); + memcpy(&ipaddr->u.a6.addr, addr, 16); + ipaddr->u.a6.pfxlen = 0; + } + ipaddr->type = QETH_IP_TYPE_VIPA; + ipaddr->set_flags = QETH_IPA_SETIP_VIPA_FLAG; + ipaddr->del_flags = QETH_IPA_DELIP_VIPA_FLAG; + } else + return -ENOMEM; + spin_lock_irqsave(&card->ip_lock, flags); + if (qeth_l3_address_exists_in_list(&card->ip_list, ipaddr, 0) || + qeth_l3_address_exists_in_list(card->ip_tbd_list, ipaddr, 0)) + rc = -EEXIST; + spin_unlock_irqrestore(&card->ip_lock, flags); + if (rc) { + PRINT_WARN("Cannot add VIPA. 
Address already exists!\n"); + return rc; + } + if (!qeth_l3_add_ip(card, ipaddr)) + kfree(ipaddr); + qeth_l3_set_ip_addr_list(card); + return rc; +} + +void qeth_l3_del_vipa(struct qeth_card *card, enum qeth_prot_versions proto, + const u8 *addr) +{ + struct qeth_ipaddr *ipaddr; + + ipaddr = qeth_l3_get_addr_buffer(proto); + if (ipaddr) { + if (proto == QETH_PROT_IPV4) { + QETH_DBF_TEXT(trace, 2, "delvipa4"); + memcpy(&ipaddr->u.a4.addr, addr, 4); + ipaddr->u.a4.mask = 0; + } else if (proto == QETH_PROT_IPV6) { + QETH_DBF_TEXT(trace, 2, "delvipa6"); + memcpy(&ipaddr->u.a6.addr, addr, 16); + ipaddr->u.a6.pfxlen = 0; + } + ipaddr->type = QETH_IP_TYPE_VIPA; + } else + return; + if (!qeth_l3_delete_ip(card, ipaddr)) + kfree(ipaddr); + qeth_l3_set_ip_addr_list(card); +} + +/* + * proxy ARP related functions + */ +int qeth_l3_add_rxip(struct qeth_card *card, enum qeth_prot_versions proto, + const u8 *addr) +{ + struct qeth_ipaddr *ipaddr; + unsigned long flags; + int rc = 0; + + ipaddr = qeth_l3_get_addr_buffer(proto); + if (ipaddr) { + if (proto == QETH_PROT_IPV4) { + QETH_DBF_TEXT(trace, 2, "addrxip4"); + memcpy(&ipaddr->u.a4.addr, addr, 4); + ipaddr->u.a4.mask = 0; + } else if (proto == QETH_PROT_IPV6) { + QETH_DBF_TEXT(trace, 2, "addrxip6"); + memcpy(&ipaddr->u.a6.addr, addr, 16); + ipaddr->u.a6.pfxlen = 0; + } + ipaddr->type = QETH_IP_TYPE_RXIP; + ipaddr->set_flags = QETH_IPA_SETIP_TAKEOVER_FLAG; + ipaddr->del_flags = 0; + } else + return -ENOMEM; + spin_lock_irqsave(&card->ip_lock, flags); + if (qeth_l3_address_exists_in_list(&card->ip_list, ipaddr, 0) || + qeth_l3_address_exists_in_list(card->ip_tbd_list, ipaddr, 0)) + rc = -EEXIST; + spin_unlock_irqrestore(&card->ip_lock, flags); + if (rc) { + PRINT_WARN("Cannot add RXIP. Address already exists!\n"); + return rc; + } + if (!qeth_l3_add_ip(card, ipaddr)) + kfree(ipaddr); + qeth_l3_set_ip_addr_list(card); + return 0; +} + +void qeth_l3_del_rxip(struct qeth_card *card, enum qeth_prot_versions proto, + const u8 *addr) +{ + struct qeth_ipaddr *ipaddr; + + ipaddr = qeth_l3_get_addr_buffer(proto); + if (ipaddr) { + if (proto == QETH_PROT_IPV4) { + QETH_DBF_TEXT(trace, 2, "addrxip4"); + memcpy(&ipaddr->u.a4.addr, addr, 4); + ipaddr->u.a4.mask = 0; + } else if (proto == QETH_PROT_IPV6) { + QETH_DBF_TEXT(trace, 2, "addrxip6"); + memcpy(&ipaddr->u.a6.addr, addr, 16); + ipaddr->u.a6.pfxlen = 0; + } + ipaddr->type = QETH_IP_TYPE_RXIP; + } else + return; + if (!qeth_l3_delete_ip(card, ipaddr)) + kfree(ipaddr); + qeth_l3_set_ip_addr_list(card); +} + +static int qeth_l3_register_addr_entry(struct qeth_card *card, + struct qeth_ipaddr *addr) +{ + char buf[50]; + int rc = 0; + int cnt = 3; + + if (addr->proto == QETH_PROT_IPV4) { + QETH_DBF_TEXT(trace, 2, "setaddr4"); + QETH_DBF_HEX(trace, 3, &addr->u.a4.addr, sizeof(int)); + } else if (addr->proto == QETH_PROT_IPV6) { + QETH_DBF_TEXT(trace, 2, "setaddr6"); + QETH_DBF_HEX(trace, 3, &addr->u.a6.addr, 8); + QETH_DBF_HEX(trace, 3, ((char *)&addr->u.a6.addr) + 8, 8); + } else { + QETH_DBF_TEXT(trace, 2, "setaddr?"); + QETH_DBF_HEX(trace, 3, addr, sizeof(struct qeth_ipaddr)); + } + do { + if (addr->is_multicast) + rc = qeth_l3_send_setdelmc(card, addr, IPA_CMD_SETIPM); + else + rc = qeth_l3_send_setdelip(card, addr, IPA_CMD_SETIP, + addr->set_flags); + if (rc) + QETH_DBF_TEXT(trace, 2, "failed"); + } while ((--cnt > 0) && rc); + if (rc) { + QETH_DBF_TEXT(trace, 2, "FAILED"); + qeth_l3_ipaddr_to_string(addr->proto, (u8 *)&addr->u, buf); + PRINT_WARN("Could not register IP address %s (rc=0x%x/%d)\n", + buf, rc, 
rc); + } + return rc; +} + +static int qeth_l3_deregister_addr_entry(struct qeth_card *card, + struct qeth_ipaddr *addr) +{ + int rc = 0; + + if (addr->proto == QETH_PROT_IPV4) { + QETH_DBF_TEXT(trace, 2, "deladdr4"); + QETH_DBF_HEX(trace, 3, &addr->u.a4.addr, sizeof(int)); + } else if (addr->proto == QETH_PROT_IPV6) { + QETH_DBF_TEXT(trace, 2, "deladdr6"); + QETH_DBF_HEX(trace, 3, &addr->u.a6.addr, 8); + QETH_DBF_HEX(trace, 3, ((char *)&addr->u.a6.addr) + 8, 8); + } else { + QETH_DBF_TEXT(trace, 2, "deladdr?"); + QETH_DBF_HEX(trace, 3, addr, sizeof(struct qeth_ipaddr)); + } + if (addr->is_multicast) + rc = qeth_l3_send_setdelmc(card, addr, IPA_CMD_DELIPM); + else + rc = qeth_l3_send_setdelip(card, addr, IPA_CMD_DELIP, + addr->del_flags); + if (rc) { + QETH_DBF_TEXT(trace, 2, "failed"); + /* TODO: re-activate this warning as soon as we have a + * clean mirco code + qeth_ipaddr_to_string(addr->proto, (u8 *)&addr->u, buf); + PRINT_WARN("Could not deregister IP address %s (rc=%x)\n", + buf, rc); + */ + } + + return rc; +} + +static inline u8 qeth_l3_get_qeth_hdr_flags4(int cast_type) +{ + if (cast_type == RTN_MULTICAST) + return QETH_CAST_MULTICAST; + if (cast_type == RTN_BROADCAST) + return QETH_CAST_BROADCAST; + return QETH_CAST_UNICAST; +} + +static inline u8 qeth_l3_get_qeth_hdr_flags6(int cast_type) +{ + u8 ct = QETH_HDR_PASSTHRU | QETH_HDR_IPV6; + if (cast_type == RTN_MULTICAST) + return ct | QETH_CAST_MULTICAST; + if (cast_type == RTN_ANYCAST) + return ct | QETH_CAST_ANYCAST; + if (cast_type == RTN_BROADCAST) + return ct | QETH_CAST_BROADCAST; + return ct | QETH_CAST_UNICAST; +} + +static int qeth_l3_send_setadp_mode(struct qeth_card *card, __u32 command, + __u32 mode) +{ + int rc; + struct qeth_cmd_buffer *iob; + struct qeth_ipa_cmd *cmd; + + QETH_DBF_TEXT(trace, 4, "adpmode"); + + iob = qeth_get_adapter_cmd(card, command, + sizeof(struct qeth_ipacmd_setadpparms)); + cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); + cmd->data.setadapterparms.data.mode = mode; + rc = qeth_send_ipa_cmd(card, iob, qeth_default_setadapterparms_cb, + NULL); + return rc; +} + +static int qeth_l3_setadapter_hstr(struct qeth_card *card) +{ + int rc; + + QETH_DBF_TEXT(trace, 4, "adphstr"); + + if (qeth_adp_supported(card, IPA_SETADP_SET_BROADCAST_MODE)) { + rc = qeth_l3_send_setadp_mode(card, + IPA_SETADP_SET_BROADCAST_MODE, + card->options.broadcast_mode); + if (rc) + PRINT_WARN("couldn't set broadcast mode on " + "device %s: x%x\n", + CARD_BUS_ID(card), rc); + rc = qeth_l3_send_setadp_mode(card, + IPA_SETADP_ALTER_MAC_ADDRESS, + card->options.macaddr_mode); + if (rc) + PRINT_WARN("couldn't set macaddr mode on " + "device %s: x%x\n", CARD_BUS_ID(card), rc); + return rc; + } + if (card->options.broadcast_mode == QETH_TR_BROADCAST_LOCAL) + PRINT_WARN("set adapter parameters not available " + "to set broadcast mode, using ALLRINGS " + "on device %s:\n", CARD_BUS_ID(card)); + if (card->options.macaddr_mode == QETH_TR_MACADDR_CANONICAL) + PRINT_WARN("set adapter parameters not available " + "to set macaddr mode, using NONCANONICAL " + "on device %s:\n", CARD_BUS_ID(card)); + return 0; +} + +static int qeth_l3_setadapter_parms(struct qeth_card *card) +{ + int rc; + + QETH_DBF_TEXT(setup, 2, "setadprm"); + + if (!qeth_is_supported(card, IPA_SETADAPTERPARMS)) { + PRINT_WARN("set adapter parameters not supported " + "on device %s.\n", + CARD_BUS_ID(card)); + QETH_DBF_TEXT(setup, 2, " notsupp"); + return 0; + } + rc = qeth_query_setadapterparms(card); + if (rc) { + PRINT_WARN("couldn't set adapter 
parameters on device %s: " + "x%x\n", CARD_BUS_ID(card), rc); + return rc; + } + if (qeth_adp_supported(card, IPA_SETADP_ALTER_MAC_ADDRESS)) { + rc = qeth_setadpparms_change_macaddr(card); + if (rc) + PRINT_WARN("couldn't get MAC address on " + "device %s: x%x\n", + CARD_BUS_ID(card), rc); + } + + if ((card->info.link_type == QETH_LINK_TYPE_HSTR) || + (card->info.link_type == QETH_LINK_TYPE_LANE_TR)) + rc = qeth_l3_setadapter_hstr(card); + + return rc; +} + +static int qeth_l3_default_setassparms_cb(struct qeth_card *card, + struct qeth_reply *reply, unsigned long data) +{ + struct qeth_ipa_cmd *cmd; + + QETH_DBF_TEXT(trace, 4, "defadpcb"); + + cmd = (struct qeth_ipa_cmd *) data; + if (cmd->hdr.return_code == 0) { + cmd->hdr.return_code = cmd->data.setassparms.hdr.return_code; + if (cmd->hdr.prot_version == QETH_PROT_IPV4) + card->options.ipa4.enabled_funcs = cmd->hdr.ipa_enabled; + if (cmd->hdr.prot_version == QETH_PROT_IPV6) + card->options.ipa6.enabled_funcs = cmd->hdr.ipa_enabled; + } + if (cmd->data.setassparms.hdr.assist_no == IPA_INBOUND_CHECKSUM && + cmd->data.setassparms.hdr.command_code == IPA_CMD_ASS_START) { + card->info.csum_mask = cmd->data.setassparms.data.flags_32bit; + QETH_DBF_TEXT_(trace, 3, "csum:%d", card->info.csum_mask); + } + return 0; +} + +static struct qeth_cmd_buffer *qeth_l3_get_setassparms_cmd( + struct qeth_card *card, enum qeth_ipa_funcs ipa_func, __u16 cmd_code, + __u16 len, enum qeth_prot_versions prot) +{ + struct qeth_cmd_buffer *iob; + struct qeth_ipa_cmd *cmd; + + QETH_DBF_TEXT(trace, 4, "getasscm"); + iob = qeth_get_ipacmd_buffer(card, IPA_CMD_SETASSPARMS, prot); + + cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); + cmd->data.setassparms.hdr.assist_no = ipa_func; + cmd->data.setassparms.hdr.length = 8 + len; + cmd->data.setassparms.hdr.command_code = cmd_code; + cmd->data.setassparms.hdr.return_code = 0; + cmd->data.setassparms.hdr.seq_no = 0; + + return iob; +} + +static int qeth_l3_send_setassparms(struct qeth_card *card, + struct qeth_cmd_buffer *iob, __u16 len, long data, + int (*reply_cb)(struct qeth_card *, struct qeth_reply *, + unsigned long), + void *reply_param) +{ + int rc; + struct qeth_ipa_cmd *cmd; + + QETH_DBF_TEXT(trace, 4, "sendassp"); + + cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); + if (len <= sizeof(__u32)) + cmd->data.setassparms.data.flags_32bit = (__u32) data; + else /* (len > sizeof(__u32)) */ + memcpy(&cmd->data.setassparms.data, (void *) data, len); + + rc = qeth_send_ipa_cmd(card, iob, reply_cb, reply_param); + return rc; +} + +#ifdef CONFIG_QETH_IPV6 +static int qeth_l3_send_simple_setassparms_ipv6(struct qeth_card *card, + enum qeth_ipa_funcs ipa_func, __u16 cmd_code) +{ + int rc; + struct qeth_cmd_buffer *iob; + + QETH_DBF_TEXT(trace, 4, "simassp6"); + iob = qeth_l3_get_setassparms_cmd(card, ipa_func, cmd_code, + 0, QETH_PROT_IPV6); + rc = qeth_l3_send_setassparms(card, iob, 0, 0, + qeth_l3_default_setassparms_cb, NULL); + return rc; +} +#endif + +static int qeth_l3_send_simple_setassparms(struct qeth_card *card, + enum qeth_ipa_funcs ipa_func, __u16 cmd_code, long data) +{ + int rc; + int length = 0; + struct qeth_cmd_buffer *iob; + + QETH_DBF_TEXT(trace, 4, "simassp4"); + if (data) + length = sizeof(__u32); + iob = qeth_l3_get_setassparms_cmd(card, ipa_func, cmd_code, + length, QETH_PROT_IPV4); + rc = qeth_l3_send_setassparms(card, iob, length, data, + qeth_l3_default_setassparms_cb, NULL); + return rc; +} + +static int qeth_l3_start_ipa_arp_processing(struct qeth_card *card) +{ + int rc; 
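+	/* Start the ARP processing assist on the card; a card without
+	 * this assist only triggers a warning here and 0 is returned. */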
+ + QETH_DBF_TEXT(trace, 3, "ipaarp"); + + if (!qeth_is_supported(card, IPA_ARP_PROCESSING)) { + PRINT_WARN("ARP processing not supported " + "on %s!\n", QETH_CARD_IFNAME(card)); + return 0; + } + rc = qeth_l3_send_simple_setassparms(card, IPA_ARP_PROCESSING, + IPA_CMD_ASS_START, 0); + if (rc) { + PRINT_WARN("Could not start ARP processing " + "assist on %s: 0x%x\n", + QETH_CARD_IFNAME(card), rc); + } + return rc; +} + +static int qeth_l3_start_ipa_ip_fragmentation(struct qeth_card *card) +{ + int rc; + + QETH_DBF_TEXT(trace, 3, "ipaipfrg"); + + if (!qeth_is_supported(card, IPA_IP_FRAGMENTATION)) { + PRINT_INFO("Hardware IP fragmentation not supported on %s\n", + QETH_CARD_IFNAME(card)); + return -EOPNOTSUPP; + } + + rc = qeth_l3_send_simple_setassparms(card, IPA_IP_FRAGMENTATION, + IPA_CMD_ASS_START, 0); + if (rc) { + PRINT_WARN("Could not start Hardware IP fragmentation " + "assist on %s: 0x%x\n", + QETH_CARD_IFNAME(card), rc); + } else + PRINT_INFO("Hardware IP fragmentation enabled \n"); + return rc; +} + +static int qeth_l3_start_ipa_source_mac(struct qeth_card *card) +{ + int rc; + + QETH_DBF_TEXT(trace, 3, "stsrcmac"); + + if (!card->options.fake_ll) + return -EOPNOTSUPP; + + if (!qeth_is_supported(card, IPA_SOURCE_MAC)) { + PRINT_INFO("Inbound source address not " + "supported on %s\n", QETH_CARD_IFNAME(card)); + return -EOPNOTSUPP; + } + + rc = qeth_l3_send_simple_setassparms(card, IPA_SOURCE_MAC, + IPA_CMD_ASS_START, 0); + if (rc) + PRINT_WARN("Could not start inbound source " + "assist on %s: 0x%x\n", + QETH_CARD_IFNAME(card), rc); + return rc; +} + +static int qeth_l3_start_ipa_vlan(struct qeth_card *card) +{ + int rc = 0; + + QETH_DBF_TEXT(trace, 3, "strtvlan"); + + if (!qeth_is_supported(card, IPA_FULL_VLAN)) { + PRINT_WARN("VLAN not supported on %s\n", + QETH_CARD_IFNAME(card)); + return -EOPNOTSUPP; + } + + rc = qeth_l3_send_simple_setassparms(card, IPA_VLAN_PRIO, + IPA_CMD_ASS_START, 0); + if (rc) { + PRINT_WARN("Could not start vlan " + "assist on %s: 0x%x\n", + QETH_CARD_IFNAME(card), rc); + } else { + PRINT_INFO("VLAN enabled \n"); + } + return rc; +} + +static int qeth_l3_start_ipa_multicast(struct qeth_card *card) +{ + int rc; + + QETH_DBF_TEXT(trace, 3, "stmcast"); + + if (!qeth_is_supported(card, IPA_MULTICASTING)) { + PRINT_WARN("Multicast not supported on %s\n", + QETH_CARD_IFNAME(card)); + return -EOPNOTSUPP; + } + + rc = qeth_l3_send_simple_setassparms(card, IPA_MULTICASTING, + IPA_CMD_ASS_START, 0); + if (rc) { + PRINT_WARN("Could not start multicast " + "assist on %s: rc=%i\n", + QETH_CARD_IFNAME(card), rc); + } else { + PRINT_INFO("Multicast enabled\n"); + card->dev->flags |= IFF_MULTICAST; + } + return rc; +} + +static int qeth_l3_query_ipassists_cb(struct qeth_card *card, + struct qeth_reply *reply, unsigned long data) +{ + struct qeth_ipa_cmd *cmd; + + QETH_DBF_TEXT(setup, 2, "qipasscb"); + + cmd = (struct qeth_ipa_cmd *) data; + if (cmd->hdr.prot_version == QETH_PROT_IPV4) { + card->options.ipa4.supported_funcs = cmd->hdr.ipa_supported; + card->options.ipa4.enabled_funcs = cmd->hdr.ipa_enabled; + } else { + card->options.ipa6.supported_funcs = cmd->hdr.ipa_supported; + card->options.ipa6.enabled_funcs = cmd->hdr.ipa_enabled; + } + QETH_DBF_TEXT(setup, 2, "suppenbl"); + QETH_DBF_TEXT_(setup, 2, "%x", cmd->hdr.ipa_supported); + QETH_DBF_TEXT_(setup, 2, "%x", cmd->hdr.ipa_enabled); + return 0; +} + +static int qeth_l3_query_ipassists(struct qeth_card *card, + enum qeth_prot_versions prot) +{ + int rc; + struct qeth_cmd_buffer *iob; + + QETH_DBF_TEXT_(setup, 
2, "qipassi%i", prot); + iob = qeth_get_ipacmd_buffer(card, IPA_CMD_QIPASSIST, prot); + rc = qeth_send_ipa_cmd(card, iob, qeth_l3_query_ipassists_cb, NULL); + return rc; +} + +#ifdef CONFIG_QETH_IPV6 +static int qeth_l3_softsetup_ipv6(struct qeth_card *card) +{ + int rc; + + QETH_DBF_TEXT(trace, 3, "softipv6"); + + if (card->info.type == QETH_CARD_TYPE_IQD) + goto out; + + rc = qeth_l3_query_ipassists(card, QETH_PROT_IPV6); + if (rc) { + PRINT_ERR("IPv6 query ipassist failed on %s\n", + QETH_CARD_IFNAME(card)); + return rc; + } + rc = qeth_l3_send_simple_setassparms(card, IPA_IPV6, + IPA_CMD_ASS_START, 3); + if (rc) { + PRINT_WARN("IPv6 start assist (version 4) failed " + "on %s: 0x%x\n", + QETH_CARD_IFNAME(card), rc); + return rc; + } + rc = qeth_l3_send_simple_setassparms_ipv6(card, IPA_IPV6, + IPA_CMD_ASS_START); + if (rc) { + PRINT_WARN("IPV6 start assist (version 6) failed " + "on %s: 0x%x\n", + QETH_CARD_IFNAME(card), rc); + return rc; + } + rc = qeth_l3_send_simple_setassparms_ipv6(card, IPA_PASSTHRU, + IPA_CMD_ASS_START); + if (rc) { + PRINT_WARN("Could not enable passthrough " + "on %s: 0x%x\n", + QETH_CARD_IFNAME(card), rc); + return rc; + } +out: + PRINT_INFO("IPV6 enabled \n"); + return 0; +} +#endif + +static int qeth_l3_start_ipa_ipv6(struct qeth_card *card) +{ + int rc = 0; + + QETH_DBF_TEXT(trace, 3, "strtipv6"); + + if (!qeth_is_supported(card, IPA_IPV6)) { + PRINT_WARN("IPv6 not supported on %s\n", + QETH_CARD_IFNAME(card)); + return 0; + } +#ifdef CONFIG_QETH_IPV6 + rc = qeth_l3_softsetup_ipv6(card); +#endif + return rc ; +} + +static int qeth_l3_start_ipa_broadcast(struct qeth_card *card) +{ + int rc; + + QETH_DBF_TEXT(trace, 3, "stbrdcst"); + card->info.broadcast_capable = 0; + if (!qeth_is_supported(card, IPA_FILTERING)) { + PRINT_WARN("Broadcast not supported on %s\n", + QETH_CARD_IFNAME(card)); + rc = -EOPNOTSUPP; + goto out; + } + rc = qeth_l3_send_simple_setassparms(card, IPA_FILTERING, + IPA_CMD_ASS_START, 0); + if (rc) { + PRINT_WARN("Could not enable broadcasting filtering " + "on %s: 0x%x\n", + QETH_CARD_IFNAME(card), rc); + goto out; + } + + rc = qeth_l3_send_simple_setassparms(card, IPA_FILTERING, + IPA_CMD_ASS_CONFIGURE, 1); + if (rc) { + PRINT_WARN("Could not set up broadcast filtering on %s: 0x%x\n", + QETH_CARD_IFNAME(card), rc); + goto out; + } + card->info.broadcast_capable = QETH_BROADCAST_WITH_ECHO; + PRINT_INFO("Broadcast enabled \n"); + rc = qeth_l3_send_simple_setassparms(card, IPA_FILTERING, + IPA_CMD_ASS_ENABLE, 1); + if (rc) { + PRINT_WARN("Could not set up broadcast echo filtering on " + "%s: 0x%x\n", QETH_CARD_IFNAME(card), rc); + goto out; + } + card->info.broadcast_capable = QETH_BROADCAST_WITHOUT_ECHO; +out: + if (card->info.broadcast_capable) + card->dev->flags |= IFF_BROADCAST; + else + card->dev->flags &= ~IFF_BROADCAST; + return rc; +} + +static int qeth_l3_send_checksum_command(struct qeth_card *card) +{ + int rc; + + rc = qeth_l3_send_simple_setassparms(card, IPA_INBOUND_CHECKSUM, + IPA_CMD_ASS_START, 0); + if (rc) { + PRINT_WARN("Starting Inbound HW Checksumming failed on %s: " + "0x%x,\ncontinuing using Inbound SW Checksumming\n", + QETH_CARD_IFNAME(card), rc); + return rc; + } + rc = qeth_l3_send_simple_setassparms(card, IPA_INBOUND_CHECKSUM, + IPA_CMD_ASS_ENABLE, + card->info.csum_mask); + if (rc) { + PRINT_WARN("Enabling Inbound HW Checksumming failed on %s: " + "0x%x,\ncontinuing using Inbound SW Checksumming\n", + QETH_CARD_IFNAME(card), rc); + return rc; + } + return 0; +} + +static int qeth_l3_start_ipa_checksum(struct 
qeth_card *card) +{ + int rc = 0; + + QETH_DBF_TEXT(trace, 3, "strtcsum"); + + if (card->options.checksum_type == NO_CHECKSUMMING) { + PRINT_WARN("Using no checksumming on %s.\n", + QETH_CARD_IFNAME(card)); + return 0; + } + if (card->options.checksum_type == SW_CHECKSUMMING) { + PRINT_WARN("Using SW checksumming on %s.\n", + QETH_CARD_IFNAME(card)); + return 0; + } + if (!qeth_is_supported(card, IPA_INBOUND_CHECKSUM)) { + PRINT_WARN("Inbound HW Checksumming not " + "supported on %s,\ncontinuing " + "using Inbound SW Checksumming\n", + QETH_CARD_IFNAME(card)); + card->options.checksum_type = SW_CHECKSUMMING; + return 0; + } + rc = qeth_l3_send_checksum_command(card); + if (!rc) + PRINT_INFO("HW Checksumming (inbound) enabled \n"); + + return rc; +} + +static int qeth_l3_start_ipa_tso(struct qeth_card *card) +{ + int rc; + + QETH_DBF_TEXT(trace, 3, "sttso"); + + if (!qeth_is_supported(card, IPA_OUTBOUND_TSO)) { + PRINT_WARN("Outbound TSO not supported on %s\n", + QETH_CARD_IFNAME(card)); + rc = -EOPNOTSUPP; + } else { + rc = qeth_l3_send_simple_setassparms(card, IPA_OUTBOUND_TSO, + IPA_CMD_ASS_START, 0); + if (rc) + PRINT_WARN("Could not start outbound TSO " + "assist on %s: rc=%i\n", + QETH_CARD_IFNAME(card), rc); + else + PRINT_INFO("Outbound TSO enabled\n"); + } + if (rc && (card->options.large_send == QETH_LARGE_SEND_TSO)) { + card->options.large_send = QETH_LARGE_SEND_NO; + card->dev->features &= ~(NETIF_F_TSO | NETIF_F_SG); + } + return rc; +} + +static int qeth_l3_start_ipassists(struct qeth_card *card) +{ + QETH_DBF_TEXT(trace, 3, "strtipas"); + qeth_l3_start_ipa_arp_processing(card); /* go on*/ + qeth_l3_start_ipa_ip_fragmentation(card); /* go on*/ + qeth_l3_start_ipa_source_mac(card); /* go on*/ + qeth_l3_start_ipa_vlan(card); /* go on*/ + qeth_l3_start_ipa_multicast(card); /* go on*/ + qeth_l3_start_ipa_ipv6(card); /* go on*/ + qeth_l3_start_ipa_broadcast(card); /* go on*/ + qeth_l3_start_ipa_checksum(card); /* go on*/ + qeth_l3_start_ipa_tso(card); /* go on*/ + return 0; +} + +static int qeth_l3_put_unique_id(struct qeth_card *card) +{ + + int rc = 0; + struct qeth_cmd_buffer *iob; + struct qeth_ipa_cmd *cmd; + + QETH_DBF_TEXT(trace, 2, "puniqeid"); + + if ((card->info.unique_id & UNIQUE_ID_NOT_BY_CARD) == + UNIQUE_ID_NOT_BY_CARD) + return -1; + iob = qeth_get_ipacmd_buffer(card, IPA_CMD_DESTROY_ADDR, + QETH_PROT_IPV6); + cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); + *((__u16 *) &cmd->data.create_destroy_addr.unique_id[6]) = + card->info.unique_id; + memcpy(&cmd->data.create_destroy_addr.unique_id[0], + card->dev->dev_addr, OSA_ADDR_LEN); + rc = qeth_send_ipa_cmd(card, iob, NULL, NULL); + return rc; +} + +static int qeth_l3_iqd_read_initial_mac_cb(struct qeth_card *card, + struct qeth_reply *reply, unsigned long data) +{ + struct qeth_ipa_cmd *cmd; + + cmd = (struct qeth_ipa_cmd *) data; + if (cmd->hdr.return_code == 0) + memcpy(card->dev->dev_addr, + cmd->data.create_destroy_addr.unique_id, ETH_ALEN); + else + random_ether_addr(card->dev->dev_addr); + + return 0; +} + +static int qeth_l3_iqd_read_initial_mac(struct qeth_card *card) +{ + int rc = 0; + struct qeth_cmd_buffer *iob; + struct qeth_ipa_cmd *cmd; + + QETH_DBF_TEXT(setup, 2, "hsrmac"); + + iob = qeth_get_ipacmd_buffer(card, IPA_CMD_CREATE_ADDR, + QETH_PROT_IPV6); + cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); + *((__u16 *) &cmd->data.create_destroy_addr.unique_id[6]) = + card->info.unique_id; + + rc = qeth_send_ipa_cmd(card, iob, qeth_l3_iqd_read_initial_mac_cb, + NULL); + return rc; 
+} + +static int qeth_l3_get_unique_id_cb(struct qeth_card *card, + struct qeth_reply *reply, unsigned long data) +{ + struct qeth_ipa_cmd *cmd; + + cmd = (struct qeth_ipa_cmd *) data; + if (cmd->hdr.return_code == 0) + card->info.unique_id = *((__u16 *) + &cmd->data.create_destroy_addr.unique_id[6]); + else { + card->info.unique_id = UNIQUE_ID_IF_CREATE_ADDR_FAILED | + UNIQUE_ID_NOT_BY_CARD; + PRINT_WARN("couldn't get a unique id from the card on device " + "%s (result=x%x), using default id. ipv6 " + "autoconfig on other lpars may lead to duplicate " + "ip addresses. please use manually " + "configured ones.\n", + CARD_BUS_ID(card), cmd->hdr.return_code); + } + return 0; +} + +static int qeth_l3_get_unique_id(struct qeth_card *card) +{ + int rc = 0; + struct qeth_cmd_buffer *iob; + struct qeth_ipa_cmd *cmd; + + QETH_DBF_TEXT(setup, 2, "guniqeid"); + + if (!qeth_is_supported(card, IPA_IPV6)) { + card->info.unique_id = UNIQUE_ID_IF_CREATE_ADDR_FAILED | + UNIQUE_ID_NOT_BY_CARD; + return 0; + } + + iob = qeth_get_ipacmd_buffer(card, IPA_CMD_CREATE_ADDR, + QETH_PROT_IPV6); + cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); + *((__u16 *) &cmd->data.create_destroy_addr.unique_id[6]) = + card->info.unique_id; + + rc = qeth_send_ipa_cmd(card, iob, qeth_l3_get_unique_id_cb, NULL); + return rc; +} + +static void qeth_l3_get_mac_for_ipm(__u32 ipm, char *mac, + struct net_device *dev) +{ + if (dev->type == ARPHRD_IEEE802_TR) + ip_tr_mc_map(ipm, mac); + else + ip_eth_mc_map(ipm, mac); +} + +static void qeth_l3_add_mc(struct qeth_card *card, struct in_device *in4_dev) +{ + struct qeth_ipaddr *ipm; + struct ip_mc_list *im4; + char buf[MAX_ADDR_LEN]; + + QETH_DBF_TEXT(trace, 4, "addmc"); + for (im4 = in4_dev->mc_list; im4; im4 = im4->next) { + qeth_l3_get_mac_for_ipm(im4->multiaddr, buf, in4_dev->dev); + ipm = qeth_l3_get_addr_buffer(QETH_PROT_IPV4); + if (!ipm) + continue; + ipm->u.a4.addr = im4->multiaddr; + memcpy(ipm->mac, buf, OSA_ADDR_LEN); + ipm->is_multicast = 1; + if (!qeth_l3_add_ip(card, ipm)) + kfree(ipm); + } +} + +static void qeth_l3_add_vlan_mc(struct qeth_card *card) +{ + struct in_device *in_dev; + struct vlan_group *vg; + int i; + + QETH_DBF_TEXT(trace, 4, "addmcvl"); + if (!qeth_is_supported(card, IPA_FULL_VLAN) || (card->vlangrp == NULL)) + return; + + vg = card->vlangrp; + for (i = 0; i < VLAN_GROUP_ARRAY_LEN; i++) { + struct net_device *netdev = vlan_group_get_device(vg, i); + if (netdev == NULL || + !(netdev->flags & IFF_UP)) + continue; + in_dev = in_dev_get(netdev); + if (!in_dev) + continue; + read_lock(&in_dev->mc_list_lock); + qeth_l3_add_mc(card, in_dev); + read_unlock(&in_dev->mc_list_lock); + in_dev_put(in_dev); + } +} + +static void qeth_l3_add_multicast_ipv4(struct qeth_card *card) +{ + struct in_device *in4_dev; + + QETH_DBF_TEXT(trace, 4, "chkmcv4"); + in4_dev = in_dev_get(card->dev); + if (in4_dev == NULL) + return; + read_lock(&in4_dev->mc_list_lock); + qeth_l3_add_mc(card, in4_dev); + qeth_l3_add_vlan_mc(card); + read_unlock(&in4_dev->mc_list_lock); + in_dev_put(in4_dev); +} + +#ifdef CONFIG_QETH_IPV6 +static void qeth_l3_add_mc6(struct qeth_card *card, struct inet6_dev *in6_dev) +{ + struct qeth_ipaddr *ipm; + struct ifmcaddr6 *im6; + char buf[MAX_ADDR_LEN]; + + QETH_DBF_TEXT(trace, 4, "addmc6"); + for (im6 = in6_dev->mc_list; im6 != NULL; im6 = im6->next) { + ndisc_mc_map(&im6->mca_addr, buf, in6_dev->dev, 0); + ipm = qeth_l3_get_addr_buffer(QETH_PROT_IPV6); + if (!ipm) + continue; + ipm->is_multicast = 1; + memcpy(ipm->mac, buf, OSA_ADDR_LEN); + 
memcpy(&ipm->u.a6.addr, &im6->mca_addr.s6_addr, + sizeof(struct in6_addr)); + if (!qeth_l3_add_ip(card, ipm)) + kfree(ipm); + } +} + +static void qeth_l3_add_vlan_mc6(struct qeth_card *card) +{ + struct inet6_dev *in_dev; + struct vlan_group *vg; + int i; + + QETH_DBF_TEXT(trace, 4, "admc6vl"); + if (!qeth_is_supported(card, IPA_FULL_VLAN) || (card->vlangrp == NULL)) + return; + + vg = card->vlangrp; + for (i = 0; i < VLAN_GROUP_ARRAY_LEN; i++) { + struct net_device *netdev = vlan_group_get_device(vg, i); + if (netdev == NULL || + !(netdev->flags & IFF_UP)) + continue; + in_dev = in6_dev_get(netdev); + if (!in_dev) + continue; + read_lock_bh(&in_dev->lock); + qeth_l3_add_mc6(card, in_dev); + read_unlock_bh(&in_dev->lock); + in6_dev_put(in_dev); + } +} + +static void qeth_l3_add_multicast_ipv6(struct qeth_card *card) +{ + struct inet6_dev *in6_dev; + + QETH_DBF_TEXT(trace, 4, "chkmcv6"); + if (!qeth_is_supported(card, IPA_IPV6)) + return ; + in6_dev = in6_dev_get(card->dev); + if (in6_dev == NULL) + return; + read_lock_bh(&in6_dev->lock); + qeth_l3_add_mc6(card, in6_dev); + qeth_l3_add_vlan_mc6(card); + read_unlock_bh(&in6_dev->lock); + in6_dev_put(in6_dev); +} +#endif /* CONFIG_QETH_IPV6 */ + +static void qeth_l3_free_vlan_addresses4(struct qeth_card *card, + unsigned short vid) +{ + struct in_device *in_dev; + struct in_ifaddr *ifa; + struct qeth_ipaddr *addr; + + QETH_DBF_TEXT(trace, 4, "frvaddr4"); + + in_dev = in_dev_get(vlan_group_get_device(card->vlangrp, vid)); + if (!in_dev) + return; + for (ifa = in_dev->ifa_list; ifa; ifa = ifa->ifa_next) { + addr = qeth_l3_get_addr_buffer(QETH_PROT_IPV4); + if (addr) { + addr->u.a4.addr = ifa->ifa_address; + addr->u.a4.mask = ifa->ifa_mask; + addr->type = QETH_IP_TYPE_NORMAL; + if (!qeth_l3_delete_ip(card, addr)) + kfree(addr); + } + } + in_dev_put(in_dev); +} + +static void qeth_l3_free_vlan_addresses6(struct qeth_card *card, + unsigned short vid) +{ +#ifdef CONFIG_QETH_IPV6 + struct inet6_dev *in6_dev; + struct inet6_ifaddr *ifa; + struct qeth_ipaddr *addr; + + QETH_DBF_TEXT(trace, 4, "frvaddr6"); + + in6_dev = in6_dev_get(vlan_group_get_device(card->vlangrp, vid)); + if (!in6_dev) + return; + for (ifa = in6_dev->addr_list; ifa; ifa = ifa->lst_next) { + addr = qeth_l3_get_addr_buffer(QETH_PROT_IPV6); + if (addr) { + memcpy(&addr->u.a6.addr, &ifa->addr, + sizeof(struct in6_addr)); + addr->u.a6.pfxlen = ifa->prefix_len; + addr->type = QETH_IP_TYPE_NORMAL; + if (!qeth_l3_delete_ip(card, addr)) + kfree(addr); + } + } + in6_dev_put(in6_dev); +#endif /* CONFIG_QETH_IPV6 */ +} + +static void qeth_l3_free_vlan_addresses(struct qeth_card *card, + unsigned short vid) +{ + if (!card->vlangrp) + return; + qeth_l3_free_vlan_addresses4(card, vid); + qeth_l3_free_vlan_addresses6(card, vid); +} + +static void qeth_l3_vlan_rx_register(struct net_device *dev, + struct vlan_group *grp) +{ + struct qeth_card *card = netdev_priv(dev); + unsigned long flags; + + QETH_DBF_TEXT(trace, 4, "vlanreg"); + spin_lock_irqsave(&card->vlanlock, flags); + card->vlangrp = grp; + spin_unlock_irqrestore(&card->vlanlock, flags); +} + +static void qeth_l3_vlan_rx_add_vid(struct net_device *dev, unsigned short vid) +{ + struct net_device *vlandev; + struct qeth_card *card = (struct qeth_card *) dev->priv; + struct in_device *in_dev; + + if (card->info.type == QETH_CARD_TYPE_IQD) + return; + + vlandev = vlan_group_get_device(card->vlangrp, vid); + vlandev->neigh_setup = qeth_l3_neigh_setup; + + in_dev = in_dev_get(vlandev); +#ifdef CONFIG_SYSCTL + 
neigh_sysctl_unregister(in_dev->arp_parms); +#endif + neigh_parms_release(&arp_tbl, in_dev->arp_parms); + + in_dev->arp_parms = neigh_parms_alloc(vlandev, &arp_tbl); +#ifdef CONFIG_SYSCTL + neigh_sysctl_register(vlandev, in_dev->arp_parms, NET_IPV4, + NET_IPV4_NEIGH, "ipv4", NULL, NULL); +#endif + in_dev_put(in_dev); + return; +} + +static void qeth_l3_vlan_rx_kill_vid(struct net_device *dev, unsigned short vid) +{ + struct qeth_card *card = netdev_priv(dev); + unsigned long flags; + + QETH_DBF_TEXT_(trace, 4, "kid:%d", vid); + spin_lock_irqsave(&card->vlanlock, flags); + /* unregister IP addresses of vlan device */ + qeth_l3_free_vlan_addresses(card, vid); + vlan_group_set_device(card->vlangrp, vid, NULL); + spin_unlock_irqrestore(&card->vlanlock, flags); + qeth_l3_set_multicast_list(card->dev); +} + +static inline __u16 qeth_l3_rebuild_skb(struct qeth_card *card, + struct sk_buff *skb, struct qeth_hdr *hdr) +{ + unsigned short vlan_id = 0; + __be16 prot; + struct iphdr *ip_hdr; + unsigned char tg_addr[MAX_ADDR_LEN]; + + if (!(hdr->hdr.l3.flags & QETH_HDR_PASSTHRU)) { + prot = htons((hdr->hdr.l3.flags & QETH_HDR_IPV6)? ETH_P_IPV6 : + ETH_P_IP); + switch (hdr->hdr.l3.flags & QETH_HDR_CAST_MASK) { + case QETH_CAST_MULTICAST: + switch (prot) { +#ifdef CONFIG_QETH_IPV6 + case __constant_htons(ETH_P_IPV6): + ndisc_mc_map((struct in6_addr *) + skb->data + 24, + tg_addr, card->dev, 0); + break; +#endif + case __constant_htons(ETH_P_IP): + ip_hdr = (struct iphdr *)skb->data; + (card->dev->type == ARPHRD_IEEE802_TR) ? + ip_tr_mc_map(ip_hdr->daddr, tg_addr): + ip_eth_mc_map(ip_hdr->daddr, tg_addr); + break; + default: + memcpy(tg_addr, card->dev->broadcast, + card->dev->addr_len); + } + card->stats.multicast++; + skb->pkt_type = PACKET_MULTICAST; + break; + case QETH_CAST_BROADCAST: + memcpy(tg_addr, card->dev->broadcast, + card->dev->addr_len); + card->stats.multicast++; + skb->pkt_type = PACKET_BROADCAST; + break; + case QETH_CAST_UNICAST: + case QETH_CAST_ANYCAST: + case QETH_CAST_NOCAST: + default: + skb->pkt_type = PACKET_HOST; + memcpy(tg_addr, card->dev->dev_addr, + card->dev->addr_len); + } + card->dev->header_ops->create(skb, card->dev, prot, tg_addr, + "FAKELL", card->dev->addr_len); + } + +#ifdef CONFIG_TR + if (card->dev->type == ARPHRD_IEEE802_TR) + skb->protocol = tr_type_trans(skb, card->dev); + else +#endif + skb->protocol = eth_type_trans(skb, card->dev); + + if (hdr->hdr.l3.ext_flags & + (QETH_HDR_EXT_VLAN_FRAME | QETH_HDR_EXT_INCLUDE_VLAN_TAG)) { + vlan_id = (hdr->hdr.l3.ext_flags & QETH_HDR_EXT_VLAN_FRAME)? 
+ hdr->hdr.l3.vlan_id : *((u16 *)&hdr->hdr.l3.dest_addr[12]); + } + + skb->ip_summed = card->options.checksum_type; + if (card->options.checksum_type == HW_CHECKSUMMING) { + if ((hdr->hdr.l3.ext_flags & + (QETH_HDR_EXT_CSUM_HDR_REQ | + QETH_HDR_EXT_CSUM_TRANSP_REQ)) == + (QETH_HDR_EXT_CSUM_HDR_REQ | + QETH_HDR_EXT_CSUM_TRANSP_REQ)) + skb->ip_summed = CHECKSUM_UNNECESSARY; + else + skb->ip_summed = SW_CHECKSUMMING; + } + + return vlan_id; +} + +static void qeth_l3_process_inbound_buffer(struct qeth_card *card, + struct qeth_qdio_buffer *buf, int index) +{ + struct qdio_buffer_element *element; + struct sk_buff *skb; + struct qeth_hdr *hdr; + int offset; + __u16 vlan_tag = 0; + unsigned int len; + + /* get first element of current buffer */ + element = (struct qdio_buffer_element *)&buf->buffer->element[0]; + offset = 0; + if (card->options.performance_stats) + card->perf_stats.bufs_rec++; + while ((skb = qeth_core_get_next_skb(card, buf->buffer, &element, + &offset, &hdr))) { + skb->dev = card->dev; + /* is device UP ? */ + if (!(card->dev->flags & IFF_UP)) { + dev_kfree_skb_any(skb); + continue; + } + + switch (hdr->hdr.l3.id) { + case QETH_HEADER_TYPE_LAYER3: + vlan_tag = qeth_l3_rebuild_skb(card, skb, hdr); + len = skb->len; + if (vlan_tag) + if (card->vlangrp) + vlan_hwaccel_rx(skb, card->vlangrp, + vlan_tag); + else { + dev_kfree_skb_any(skb); + continue; + } + else + netif_rx(skb); + break; + default: + dev_kfree_skb_any(skb); + QETH_DBF_TEXT(trace, 3, "inbunkno"); + QETH_DBF_HEX(control, 3, hdr, QETH_DBF_CONTROL_LEN); + continue; + } + + card->dev->last_rx = jiffies; + card->stats.rx_packets++; + card->stats.rx_bytes += len; + } +} + +static int qeth_l3_verify_vlan_dev(struct net_device *dev, + struct qeth_card *card) +{ + int rc = 0; + struct vlan_group *vg; + int i; + + vg = card->vlangrp; + if (!vg) + return rc; + + for (i = 0; i < VLAN_GROUP_ARRAY_LEN; i++) { + if (vlan_group_get_device(vg, i) == dev) { + rc = QETH_VLAN_CARD; + break; + } + } + + if (rc && !(netdev_priv(vlan_dev_info(dev)->real_dev) == (void *)card)) + return 0; + + return rc; +} + +static int qeth_l3_verify_dev(struct net_device *dev) +{ + struct qeth_card *card; + unsigned long flags; + int rc = 0; + + read_lock_irqsave(&qeth_core_card_list.rwlock, flags); + list_for_each_entry(card, &qeth_core_card_list.list, list) { + if (card->dev == dev) { + rc = QETH_REAL_CARD; + break; + } + rc = qeth_l3_verify_vlan_dev(dev, card); + if (rc) + break; + } + read_unlock_irqrestore(&qeth_core_card_list.rwlock, flags); + + return rc; +} + +static struct qeth_card *qeth_l3_get_card_from_dev(struct net_device *dev) +{ + struct qeth_card *card = NULL; + int rc; + + rc = qeth_l3_verify_dev(dev); + if (rc == QETH_REAL_CARD) + card = netdev_priv(dev); + else if (rc == QETH_VLAN_CARD) + card = netdev_priv(vlan_dev_info(dev)->real_dev); + if (card->options.layer2) + card = NULL; + QETH_DBF_TEXT_(trace, 4, "%d", rc); + return card ; +} + +static int qeth_l3_stop_card(struct qeth_card *card, int recovery_mode) +{ + int rc = 0; + + QETH_DBF_TEXT(setup, 2, "stopcard"); + QETH_DBF_HEX(setup, 2, &card, sizeof(void *)); + + qeth_set_allowed_threads(card, 0, 1); + if (qeth_wait_for_threads(card, ~QETH_RECOVER_THREAD)) + return -ERESTARTSYS; + if (card->read.state == CH_STATE_UP && + card->write.state == CH_STATE_UP && + (card->state == CARD_STATE_UP)) { + if (recovery_mode) + qeth_l3_stop(card->dev); + if (!card->use_hard_stop) { + rc = qeth_send_stoplan(card); + if (rc) + QETH_DBF_TEXT_(setup, 2, "1err%d", rc); + } + card->state = 
CARD_STATE_SOFTSETUP; + } + if (card->state == CARD_STATE_SOFTSETUP) { + qeth_l3_clear_ip_list(card, !card->use_hard_stop, 1); + qeth_clear_ipacmd_list(card); + card->state = CARD_STATE_HARDSETUP; + } + if (card->state == CARD_STATE_HARDSETUP) { + if (!card->use_hard_stop && + (card->info.type != QETH_CARD_TYPE_IQD)) { + rc = qeth_l3_put_unique_id(card); + if (rc) + QETH_DBF_TEXT_(setup, 2, "2err%d", rc); + } + qeth_qdio_clear_card(card, 0); + qeth_clear_qdio_buffers(card); + qeth_clear_working_pool_list(card); + card->state = CARD_STATE_DOWN; + } + if (card->state == CARD_STATE_DOWN) { + qeth_clear_cmd_buffers(&card->read); + qeth_clear_cmd_buffers(&card->write); + } + card->use_hard_stop = 0; + return rc; +} + +static void qeth_l3_set_multicast_list(struct net_device *dev) +{ + struct qeth_card *card = netdev_priv(dev); + + QETH_DBF_TEXT(trace, 3, "setmulti"); + qeth_l3_delete_mc_addresses(card); + qeth_l3_add_multicast_ipv4(card); +#ifdef CONFIG_QETH_IPV6 + qeth_l3_add_multicast_ipv6(card); +#endif + qeth_l3_set_ip_addr_list(card); + if (!qeth_adp_supported(card, IPA_SETADP_SET_PROMISC_MODE)) + return; + qeth_setadp_promisc_mode(card); +} + +static const char *qeth_l3_arp_get_error_cause(int *rc) +{ + switch (*rc) { + case QETH_IPA_ARP_RC_FAILED: + *rc = -EIO; + return "operation failed"; + case QETH_IPA_ARP_RC_NOTSUPP: + *rc = -EOPNOTSUPP; + return "operation not supported"; + case QETH_IPA_ARP_RC_OUT_OF_RANGE: + *rc = -EINVAL; + return "argument out of range"; + case QETH_IPA_ARP_RC_Q_NOTSUPP: + *rc = -EOPNOTSUPP; + return "query operation not supported"; + case QETH_IPA_ARP_RC_Q_NO_DATA: + *rc = -ENOENT; + return "no query data available"; + default: + return "unknown error"; + } +} + +static int qeth_l3_arp_set_no_entries(struct qeth_card *card, int no_entries) +{ + int tmp; + int rc; + + QETH_DBF_TEXT(trace, 3, "arpstnoe"); + + /* + * currently GuestLAN only supports the ARP assist function + * IPA_CMD_ASS_ARP_QUERY_INFO, but not IPA_CMD_ASS_ARP_SET_NO_ENTRIES; + * thus we say EOPNOTSUPP for this ARP function + */ + if (card->info.guestlan) + return -EOPNOTSUPP; + if (!qeth_is_supported(card, IPA_ARP_PROCESSING)) { + PRINT_WARN("ARP processing not supported " + "on %s!\n", QETH_CARD_IFNAME(card)); + return -EOPNOTSUPP; + } + rc = qeth_l3_send_simple_setassparms(card, IPA_ARP_PROCESSING, + IPA_CMD_ASS_ARP_SET_NO_ENTRIES, + no_entries); + if (rc) { + tmp = rc; + PRINT_WARN("Could not set number of ARP entries on %s: " + "%s (0x%x/%d)\n", QETH_CARD_IFNAME(card), + qeth_l3_arp_get_error_cause(&rc), tmp, tmp); + } + return rc; +} + +static void qeth_l3_copy_arp_entries_stripped(struct qeth_arp_query_info *qinfo, + struct qeth_arp_query_data *qdata, int entry_size, + int uentry_size) +{ + char *entry_ptr; + char *uentry_ptr; + int i; + + entry_ptr = (char *)&qdata->data; + uentry_ptr = (char *)(qinfo->udata + qinfo->udata_offset); + for (i = 0; i < qdata->no_entries; ++i) { + /* strip off 32 bytes "media specific information" */ + memcpy(uentry_ptr, (entry_ptr + 32), entry_size - 32); + entry_ptr += entry_size; + uentry_ptr += uentry_size; + } +} + +static int qeth_l3_arp_query_cb(struct qeth_card *card, + struct qeth_reply *reply, unsigned long data) +{ + struct qeth_ipa_cmd *cmd; + struct qeth_arp_query_data *qdata; + struct qeth_arp_query_info *qinfo; + int entry_size; + int uentry_size; + int i; + + QETH_DBF_TEXT(trace, 4, "arpquecb"); + + qinfo = (struct qeth_arp_query_info *) reply->param; + cmd = (struct qeth_ipa_cmd *) data; + if (cmd->hdr.return_code) { + QETH_DBF_TEXT_(trace, 
4, "qaer1%i", cmd->hdr.return_code); + return 0; + } + if (cmd->data.setassparms.hdr.return_code) { + cmd->hdr.return_code = cmd->data.setassparms.hdr.return_code; + QETH_DBF_TEXT_(trace, 4, "qaer2%i", cmd->hdr.return_code); + return 0; + } + qdata = &cmd->data.setassparms.data.query_arp; + switch (qdata->reply_bits) { + case 5: + uentry_size = entry_size = sizeof(struct qeth_arp_qi_entry5); + if (qinfo->mask_bits & QETH_QARP_STRIP_ENTRIES) + uentry_size = sizeof(struct qeth_arp_qi_entry5_short); + break; + case 7: + /* fall through to default */ + default: + /* tr is the same as eth -> entry7 */ + uentry_size = entry_size = sizeof(struct qeth_arp_qi_entry7); + if (qinfo->mask_bits & QETH_QARP_STRIP_ENTRIES) + uentry_size = sizeof(struct qeth_arp_qi_entry7_short); + break; + } + /* check if there is enough room in userspace */ + if ((qinfo->udata_len - qinfo->udata_offset) < + qdata->no_entries * uentry_size){ + QETH_DBF_TEXT_(trace, 4, "qaer3%i", -ENOMEM); + cmd->hdr.return_code = -ENOMEM; + PRINT_WARN("query ARP user space buffer is too small for " + "the returned number of ARP entries. " + "Aborting query!\n"); + goto out_error; + } + QETH_DBF_TEXT_(trace, 4, "anore%i", + cmd->data.setassparms.hdr.number_of_replies); + QETH_DBF_TEXT_(trace, 4, "aseqn%i", cmd->data.setassparms.hdr.seq_no); + QETH_DBF_TEXT_(trace, 4, "anoen%i", qdata->no_entries); + + if (qinfo->mask_bits & QETH_QARP_STRIP_ENTRIES) { + /* strip off "media specific information" */ + qeth_l3_copy_arp_entries_stripped(qinfo, qdata, entry_size, + uentry_size); + } else + /*copy entries to user buffer*/ + memcpy(qinfo->udata + qinfo->udata_offset, + (char *)&qdata->data, qdata->no_entries*uentry_size); + + qinfo->no_entries += qdata->no_entries; + qinfo->udata_offset += (qdata->no_entries*uentry_size); + /* check if all replies received ... 
*/ + if (cmd->data.setassparms.hdr.seq_no < + cmd->data.setassparms.hdr.number_of_replies) + return 1; + memcpy(qinfo->udata, &qinfo->no_entries, 4); + /* keep STRIP_ENTRIES flag so the user program can distinguish + * stripped entries from normal ones */ + if (qinfo->mask_bits & QETH_QARP_STRIP_ENTRIES) + qdata->reply_bits |= QETH_QARP_STRIP_ENTRIES; + memcpy(qinfo->udata + QETH_QARP_MASK_OFFSET, &qdata->reply_bits, 2); + return 0; +out_error: + i = 0; + memcpy(qinfo->udata, &i, 4); + return 0; +} + +static int qeth_l3_send_ipa_arp_cmd(struct qeth_card *card, + struct qeth_cmd_buffer *iob, int len, + int (*reply_cb)(struct qeth_card *, struct qeth_reply *, + unsigned long), + void *reply_param) +{ + QETH_DBF_TEXT(trace, 4, "sendarp"); + + memcpy(iob->data, IPA_PDU_HEADER, IPA_PDU_HEADER_SIZE); + memcpy(QETH_IPA_CMD_DEST_ADDR(iob->data), + &card->token.ulp_connection_r, QETH_MPC_TOKEN_LENGTH); + return qeth_send_control_data(card, IPA_PDU_HEADER_SIZE + len, iob, + reply_cb, reply_param); +} + +static int qeth_l3_arp_query(struct qeth_card *card, char __user *udata) +{ + struct qeth_cmd_buffer *iob; + struct qeth_arp_query_info qinfo = {0, }; + int tmp; + int rc; + + QETH_DBF_TEXT(trace, 3, "arpquery"); + + if (!qeth_is_supported(card,/*IPA_QUERY_ARP_ADDR_INFO*/ + IPA_ARP_PROCESSING)) { + PRINT_WARN("ARP processing not supported " + "on %s!\n", QETH_CARD_IFNAME(card)); + return -EOPNOTSUPP; + } + /* get size of userspace buffer and mask_bits -> 6 bytes */ + if (copy_from_user(&qinfo, udata, 6)) + return -EFAULT; + qinfo.udata = kzalloc(qinfo.udata_len, GFP_KERNEL); + if (!qinfo.udata) + return -ENOMEM; + qinfo.udata_offset = QETH_QARP_ENTRIES_OFFSET; + iob = qeth_l3_get_setassparms_cmd(card, IPA_ARP_PROCESSING, + IPA_CMD_ASS_ARP_QUERY_INFO, + sizeof(int), QETH_PROT_IPV4); + + rc = qeth_l3_send_ipa_arp_cmd(card, iob, + QETH_SETASS_BASE_LEN+QETH_ARP_CMD_LEN, + qeth_l3_arp_query_cb, (void *)&qinfo); + if (rc) { + tmp = rc; + PRINT_WARN("Error while querying ARP cache on %s: %s " + "(0x%x/%d)\n", QETH_CARD_IFNAME(card), + qeth_l3_arp_get_error_cause(&rc), tmp, tmp); + if (copy_to_user(udata, qinfo.udata, 4)) + rc = -EFAULT; + } else { + if (copy_to_user(udata, qinfo.udata, qinfo.udata_len)) + rc = -EFAULT; + } + kfree(qinfo.udata); + return rc; +} + +static int qeth_l3_arp_add_entry(struct qeth_card *card, + struct qeth_arp_cache_entry *entry) +{ + struct qeth_cmd_buffer *iob; + char buf[16]; + int tmp; + int rc; + + QETH_DBF_TEXT(trace, 3, "arpadent"); + + /* + * currently GuestLAN only supports the ARP assist function + * IPA_CMD_ASS_ARP_QUERY_INFO, but not IPA_CMD_ASS_ARP_ADD_ENTRY; + * thus we say EOPNOTSUPP for this ARP function + */ + if (card->info.guestlan) + return -EOPNOTSUPP; + if (!qeth_is_supported(card, IPA_ARP_PROCESSING)) { + PRINT_WARN("ARP processing not supported " + "on %s!\n", QETH_CARD_IFNAME(card)); + return -EOPNOTSUPP; + } + + iob = qeth_l3_get_setassparms_cmd(card, IPA_ARP_PROCESSING, + IPA_CMD_ASS_ARP_ADD_ENTRY, + sizeof(struct qeth_arp_cache_entry), + QETH_PROT_IPV4); + rc = qeth_l3_send_setassparms(card, iob, + sizeof(struct qeth_arp_cache_entry), + (unsigned long) entry, + qeth_l3_default_setassparms_cb, NULL); + if (rc) { + tmp = rc; + qeth_l3_ipaddr4_to_string((u8 *)entry->ipaddr, buf); + PRINT_WARN("Could not add ARP entry for address %s on %s: " + "%s (0x%x/%d)\n", + buf, QETH_CARD_IFNAME(card), + qeth_l3_arp_get_error_cause(&rc), tmp, tmp); + } + return rc; +} + +static int qeth_l3_arp_remove_entry(struct qeth_card *card, + struct qeth_arp_cache_entry *entry) 
+{ + struct qeth_cmd_buffer *iob; + char buf[16] = {0, }; + int tmp; + int rc; + + QETH_DBF_TEXT(trace, 3, "arprment"); + + /* + * currently GuestLAN only supports the ARP assist function + * IPA_CMD_ASS_ARP_QUERY_INFO, but not IPA_CMD_ASS_ARP_REMOVE_ENTRY; + * thus we say EOPNOTSUPP for this ARP function + */ + if (card->info.guestlan) + return -EOPNOTSUPP; + if (!qeth_is_supported(card, IPA_ARP_PROCESSING)) { + PRINT_WARN("ARP processing not supported " + "on %s!\n", QETH_CARD_IFNAME(card)); + return -EOPNOTSUPP; + } + memcpy(buf, entry, 12); + iob = qeth_l3_get_setassparms_cmd(card, IPA_ARP_PROCESSING, + IPA_CMD_ASS_ARP_REMOVE_ENTRY, + 12, + QETH_PROT_IPV4); + rc = qeth_l3_send_setassparms(card, iob, + 12, (unsigned long)buf, + qeth_l3_default_setassparms_cb, NULL); + if (rc) { + tmp = rc; + memset(buf, 0, 16); + qeth_l3_ipaddr4_to_string((u8 *)entry->ipaddr, buf); + PRINT_WARN("Could not delete ARP entry for address %s on %s: " + "%s (0x%x/%d)\n", + buf, QETH_CARD_IFNAME(card), + qeth_l3_arp_get_error_cause(&rc), tmp, tmp); + } + return rc; +} + +static int qeth_l3_arp_flush_cache(struct qeth_card *card) +{ + int rc; + int tmp; + + QETH_DBF_TEXT(trace, 3, "arpflush"); + + /* + * currently GuestLAN only supports the ARP assist function + * IPA_CMD_ASS_ARP_QUERY_INFO, but not IPA_CMD_ASS_ARP_FLUSH_CACHE; + * thus we say EOPNOTSUPP for this ARP function + */ + if (card->info.guestlan || (card->info.type == QETH_CARD_TYPE_IQD)) + return -EOPNOTSUPP; + if (!qeth_is_supported(card, IPA_ARP_PROCESSING)) { + PRINT_WARN("ARP processing not supported " + "on %s!\n", QETH_CARD_IFNAME(card)); + return -EOPNOTSUPP; + } + rc = qeth_l3_send_simple_setassparms(card, IPA_ARP_PROCESSING, + IPA_CMD_ASS_ARP_FLUSH_CACHE, 0); + if (rc) { + tmp = rc; + PRINT_WARN("Could not flush ARP cache on %s: %s (0x%x/%d)\n", + QETH_CARD_IFNAME(card), + qeth_l3_arp_get_error_cause(&rc), tmp, tmp); + } + return rc; +} + +static int qeth_l3_do_ioctl(struct net_device *dev, struct ifreq *rq, int cmd) +{ + struct qeth_card *card = netdev_priv(dev); + struct qeth_arp_cache_entry arp_entry; + struct mii_ioctl_data *mii_data; + int rc = 0; + + if (!card) + return -ENODEV; + + if ((card->state != CARD_STATE_UP) && + (card->state != CARD_STATE_SOFTSETUP)) + return -ENODEV; + + switch (cmd) { + case SIOC_QETH_ARP_SET_NO_ENTRIES: + if (!capable(CAP_NET_ADMIN)) { + rc = -EPERM; + break; + } + rc = qeth_l3_arp_set_no_entries(card, rq->ifr_ifru.ifru_ivalue); + break; + case SIOC_QETH_ARP_QUERY_INFO: + if (!capable(CAP_NET_ADMIN)) { + rc = -EPERM; + break; + } + rc = qeth_l3_arp_query(card, rq->ifr_ifru.ifru_data); + break; + case SIOC_QETH_ARP_ADD_ENTRY: + if (!capable(CAP_NET_ADMIN)) { + rc = -EPERM; + break; + } + if (copy_from_user(&arp_entry, rq->ifr_ifru.ifru_data, + sizeof(struct qeth_arp_cache_entry))) + rc = -EFAULT; + else + rc = qeth_l3_arp_add_entry(card, &arp_entry); + break; + case SIOC_QETH_ARP_REMOVE_ENTRY: + if (!capable(CAP_NET_ADMIN)) { + rc = -EPERM; + break; + } + if (copy_from_user(&arp_entry, rq->ifr_ifru.ifru_data, + sizeof(struct qeth_arp_cache_entry))) + rc = -EFAULT; + else + rc = qeth_l3_arp_remove_entry(card, &arp_entry); + break; + case SIOC_QETH_ARP_FLUSH_CACHE: + if (!capable(CAP_NET_ADMIN)) { + rc = -EPERM; + break; + } + rc = qeth_l3_arp_flush_cache(card); + break; + case SIOC_QETH_ADP_SET_SNMP_CONTROL: + rc = qeth_snmp_command(card, rq->ifr_ifru.ifru_data); + break; + case SIOC_QETH_GET_CARD_TYPE: + if ((card->info.type == QETH_CARD_TYPE_OSAE) && + !card->info.guestlan) + return 1; + return 0; + 
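/*
 * Illustrative sketch only, not part of this patch: the SIOC_QETH_* cases
 * dispatched above in qeth_l3_do_ioctl() are private device ioctls, so a
 * user-space tool reaches them with an ordinary struct ifreq ioctl on any
 * INET socket.  The availability and numeric value of
 * SIOC_QETH_ARP_FLUSH_CACHE in user space (via a qeth ioctl header) is an
 * assumption here; CAP_NET_ADMIN is required, as checked in the switch.
 *
 *	#include <string.h>
 *	#include <unistd.h>
 *	#include <sys/ioctl.h>
 *	#include <sys/socket.h>
 *	#include <net/if.h>
 *
 *	int flush_qeth_arp_cache(const char *ifname)
 *	{
 *		struct ifreq ifr;
 *		int fd, rc;
 *
 *		fd = socket(AF_INET, SOCK_DGRAM, 0);	/_ any INET socket works _/
 *		if (fd < 0)
 *			return -1;
 *		memset(&ifr, 0, sizeof(ifr));
 *		strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
 *		/_ no ifr_data is needed for the flush-cache ioctl _/
 *		rc = ioctl(fd, SIOC_QETH_ARP_FLUSH_CACHE, &ifr);
 *		close(fd);
 *		return rc;
 *	}
 */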
break; + case SIOCGMIIPHY: + mii_data = if_mii(rq); + mii_data->phy_id = 0; + break; + case SIOCGMIIREG: + mii_data = if_mii(rq); + if (mii_data->phy_id != 0) + rc = -EINVAL; + else + mii_data->val_out = qeth_mdio_read(dev, + mii_data->phy_id, + mii_data->reg_num); + break; + default: + rc = -EOPNOTSUPP; + } + if (rc) + QETH_DBF_TEXT_(trace, 2, "ioce%d", rc); + return rc; +} + +static void qeth_l3_fill_header(struct qeth_card *card, struct qeth_hdr *hdr, + struct sk_buff *skb, int ipv, int cast_type) +{ + QETH_DBF_TEXT(trace, 6, "fillhdr"); + + memset(hdr, 0, sizeof(struct qeth_hdr)); + hdr->hdr.l3.id = QETH_HEADER_TYPE_LAYER3; + hdr->hdr.l3.ext_flags = 0; + + /* + * before we're going to overwrite this location with next hop ip. + * v6 uses passthrough, v4 sets the tag in the QDIO header. + */ + if (card->vlangrp && vlan_tx_tag_present(skb)) { + hdr->hdr.l3.ext_flags = (ipv == 4) ? + QETH_HDR_EXT_VLAN_FRAME : + QETH_HDR_EXT_INCLUDE_VLAN_TAG; + hdr->hdr.l3.vlan_id = vlan_tx_tag_get(skb); + } + + hdr->hdr.l3.length = skb->len - sizeof(struct qeth_hdr); + if (ipv == 4) { + /* IPv4 */ + hdr->hdr.l3.flags = qeth_l3_get_qeth_hdr_flags4(cast_type); + memset(hdr->hdr.l3.dest_addr, 0, 12); + if ((skb->dst) && (skb->dst->neighbour)) { + *((u32 *) (&hdr->hdr.l3.dest_addr[12])) = + *((u32 *) skb->dst->neighbour->primary_key); + } else { + /* fill in destination address used in ip header */ + *((u32 *) (&hdr->hdr.l3.dest_addr[12])) = + ip_hdr(skb)->daddr; + } + } else if (ipv == 6) { + /* IPv6 */ + hdr->hdr.l3.flags = qeth_l3_get_qeth_hdr_flags6(cast_type); + if (card->info.type == QETH_CARD_TYPE_IQD) + hdr->hdr.l3.flags &= ~QETH_HDR_PASSTHRU; + if ((skb->dst) && (skb->dst->neighbour)) { + memcpy(hdr->hdr.l3.dest_addr, + skb->dst->neighbour->primary_key, 16); + } else { + /* fill in destination address used in ip header */ + memcpy(hdr->hdr.l3.dest_addr, + &ipv6_hdr(skb)->daddr, 16); + } + } else { + /* passthrough */ + if ((skb->dev->type == ARPHRD_IEEE802_TR) && + !memcmp(skb->data + sizeof(struct qeth_hdr) + + sizeof(__u16), skb->dev->broadcast, 6)) { + hdr->hdr.l3.flags = QETH_CAST_BROADCAST | + QETH_HDR_PASSTHRU; + } else if (!memcmp(skb->data + sizeof(struct qeth_hdr), + skb->dev->broadcast, 6)) { + /* broadcast? */ + hdr->hdr.l3.flags = QETH_CAST_BROADCAST | + QETH_HDR_PASSTHRU; + } else { + hdr->hdr.l3.flags = (cast_type == RTN_MULTICAST) ? 
+ QETH_CAST_MULTICAST | QETH_HDR_PASSTHRU : + QETH_CAST_UNICAST | QETH_HDR_PASSTHRU; + } + } +} + +static int qeth_l3_hard_start_xmit(struct sk_buff *skb, struct net_device *dev) +{ + int rc; + u16 *tag; + struct qeth_hdr *hdr = NULL; + int elements_needed = 0; + struct qeth_card *card = netdev_priv(dev); + struct sk_buff *new_skb = NULL; + int ipv = qeth_get_ip_version(skb); + int cast_type = qeth_get_cast_type(card, skb); + struct qeth_qdio_out_q *queue = card->qdio.out_qs + [qeth_get_priority_queue(card, skb, ipv, cast_type)]; + int tx_bytes = skb->len; + enum qeth_large_send_types large_send = QETH_LARGE_SEND_NO; + struct qeth_eddp_context *ctx = NULL; + + QETH_DBF_TEXT(trace, 6, "l3xmit"); + + if ((card->info.type == QETH_CARD_TYPE_IQD) && + (skb->protocol != htons(ETH_P_IPV6)) && + (skb->protocol != htons(ETH_P_IP))) + goto tx_drop; + + if ((card->state != CARD_STATE_UP) || !card->lan_online) { + card->stats.tx_carrier_errors++; + goto tx_drop; + } + + if ((cast_type == RTN_BROADCAST) && + (card->info.broadcast_capable == 0)) + goto tx_drop; + + if (card->options.performance_stats) { + card->perf_stats.outbound_cnt++; + card->perf_stats.outbound_start_time = qeth_get_micros(); + } + + /* create a clone with writeable headroom */ + new_skb = skb_realloc_headroom(skb, sizeof(struct qeth_hdr_tso) + + VLAN_HLEN); + if (!new_skb) + goto tx_drop; + + if (card->info.type == QETH_CARD_TYPE_IQD) { + skb_pull(new_skb, ETH_HLEN); + } else { + if (new_skb->protocol == htons(ETH_P_IP)) { + if (card->dev->type == ARPHRD_IEEE802_TR) + skb_pull(new_skb, TR_HLEN); + else + skb_pull(new_skb, ETH_HLEN); + } + + if (new_skb->protocol == ETH_P_IPV6 && card->vlangrp && + vlan_tx_tag_present(new_skb)) { + skb_push(new_skb, VLAN_HLEN); + skb_copy_to_linear_data(new_skb, new_skb->data + 4, 4); + skb_copy_to_linear_data_offset(new_skb, 4, + new_skb->data + 8, 4); + skb_copy_to_linear_data_offset(new_skb, 8, + new_skb->data + 12, 4); + tag = (u16 *)(new_skb->data + 12); + *tag = __constant_htons(ETH_P_8021Q); + *(tag + 1) = htons(vlan_tx_tag_get(new_skb)); + VLAN_TX_SKB_CB(new_skb)->magic = 0; + } + } + + netif_stop_queue(dev); + + if (skb_is_gso(new_skb)) + large_send = card->options.large_send; + + /* fix hardware limitation: as long as we do not have sbal + * chaining we can not send long frag lists so we temporary + * switch to EDDP + */ + if ((large_send == QETH_LARGE_SEND_TSO) && + ((skb_shinfo(new_skb)->nr_frags + 2) > 16)) + large_send = QETH_LARGE_SEND_EDDP; + + if ((large_send == QETH_LARGE_SEND_TSO) && + (cast_type == RTN_UNSPEC)) { + hdr = (struct qeth_hdr *)skb_push(new_skb, + sizeof(struct qeth_hdr_tso)); + memset(hdr, 0, sizeof(struct qeth_hdr_tso)); + qeth_l3_fill_header(card, hdr, new_skb, ipv, cast_type); + qeth_tso_fill_header(card, hdr, new_skb); + elements_needed++; + } else { + hdr = (struct qeth_hdr *)skb_push(new_skb, + sizeof(struct qeth_hdr)); + qeth_l3_fill_header(card, hdr, new_skb, ipv, cast_type); + } + + if (large_send == QETH_LARGE_SEND_EDDP) { + /* new_skb is not owned by a socket so we use skb to get + * the protocol + */ + ctx = qeth_eddp_create_context(card, new_skb, hdr, + skb->sk->sk_protocol); + if (ctx == NULL) { + PRINT_WARN("could not create eddp context\n"); + goto tx_drop; + } + } else { + int elems = qeth_get_elements_no(card, (void *)hdr, new_skb, + elements_needed); + if (!elems) + goto tx_drop; + elements_needed += elems; + } + + if ((large_send == QETH_LARGE_SEND_NO) && + (new_skb->ip_summed == CHECKSUM_PARTIAL)) + qeth_tx_csum(new_skb); + + if (card->info.type 
!= QETH_CARD_TYPE_IQD) + rc = qeth_do_send_packet(card, queue, new_skb, hdr, + elements_needed, ctx); + else + rc = qeth_do_send_packet_fast(card, queue, new_skb, hdr, + elements_needed, ctx); + + if (!rc) { + card->stats.tx_packets++; + card->stats.tx_bytes += tx_bytes; + if (new_skb != skb) + dev_kfree_skb_any(skb); + if (card->options.performance_stats) { + if (large_send != QETH_LARGE_SEND_NO) { + card->perf_stats.large_send_bytes += tx_bytes; + card->perf_stats.large_send_cnt++; + } + if (skb_shinfo(new_skb)->nr_frags > 0) { + card->perf_stats.sg_skbs_sent++; + /* nr_frags + skb->data */ + card->perf_stats.sg_frags_sent += + skb_shinfo(new_skb)->nr_frags + 1; + } + } + + if (ctx != NULL) { + qeth_eddp_put_context(ctx); + dev_kfree_skb_any(new_skb); + } + } else { + if (ctx != NULL) + qeth_eddp_put_context(ctx); + + if (rc == -EBUSY) { + if (new_skb != skb) + dev_kfree_skb_any(new_skb); + return NETDEV_TX_BUSY; + } else + goto tx_drop; + } + + netif_wake_queue(dev); + if (card->options.performance_stats) + card->perf_stats.outbound_time += qeth_get_micros() - + card->perf_stats.outbound_start_time; + return rc; + +tx_drop: + card->stats.tx_dropped++; + card->stats.tx_errors++; + if ((new_skb != skb) && new_skb) + dev_kfree_skb_any(new_skb); + dev_kfree_skb_any(skb); + return NETDEV_TX_OK; +} + +static int qeth_l3_open(struct net_device *dev) +{ + struct qeth_card *card = netdev_priv(dev); + + QETH_DBF_TEXT(trace, 4, "qethopen"); + if (card->state != CARD_STATE_SOFTSETUP) + return -ENODEV; + card->data.state = CH_STATE_UP; + card->state = CARD_STATE_UP; + card->dev->flags |= IFF_UP; + netif_start_queue(dev); + + if (!card->lan_online && netif_carrier_ok(dev)) + netif_carrier_off(dev); + return 0; +} + +static int qeth_l3_stop(struct net_device *dev) +{ + struct qeth_card *card = netdev_priv(dev); + + QETH_DBF_TEXT(trace, 4, "qethstop"); + netif_tx_disable(dev); + card->dev->flags &= ~IFF_UP; + if (card->state == CARD_STATE_UP) + card->state = CARD_STATE_SOFTSETUP; + return 0; +} + +static u32 qeth_l3_ethtool_get_rx_csum(struct net_device *dev) +{ + struct qeth_card *card = netdev_priv(dev); + + return (card->options.checksum_type == HW_CHECKSUMMING); +} + +static int qeth_l3_ethtool_set_rx_csum(struct net_device *dev, u32 data) +{ + struct qeth_card *card = netdev_priv(dev); + enum qeth_card_states old_state; + enum qeth_checksum_types csum_type; + + if ((card->state != CARD_STATE_UP) && + (card->state != CARD_STATE_DOWN)) + return -EPERM; + + if (data) + csum_type = HW_CHECKSUMMING; + else + csum_type = SW_CHECKSUMMING; + + if (card->options.checksum_type != csum_type) { + old_state = card->state; + if (card->state == CARD_STATE_UP) + __qeth_l3_set_offline(card->gdev, 1); + card->options.checksum_type = csum_type; + if (old_state == CARD_STATE_UP) + __qeth_l3_set_online(card->gdev, 1); + } + return 0; +} + +static int qeth_l3_ethtool_set_tso(struct net_device *dev, u32 data) +{ + struct qeth_card *card = netdev_priv(dev); + + if (data) { + if (card->options.large_send == QETH_LARGE_SEND_NO) { + if (card->info.type == QETH_CARD_TYPE_IQD) + card->options.large_send = QETH_LARGE_SEND_EDDP; + else + card->options.large_send = QETH_LARGE_SEND_TSO; + dev->features |= NETIF_F_TSO; + } + } else { + dev->features &= ~NETIF_F_TSO; + card->options.large_send = QETH_LARGE_SEND_NO; + } + return 0; +} + +static struct ethtool_ops qeth_l3_ethtool_ops = { + .get_link = ethtool_op_get_link, + .get_tx_csum = ethtool_op_get_tx_csum, + .set_tx_csum = ethtool_op_set_tx_hw_csum, + .get_rx_csum = 
qeth_l3_ethtool_get_rx_csum, + .set_rx_csum = qeth_l3_ethtool_set_rx_csum, + .get_sg = ethtool_op_get_sg, + .set_sg = ethtool_op_set_sg, + .get_tso = ethtool_op_get_tso, + .set_tso = qeth_l3_ethtool_set_tso, + .get_strings = qeth_core_get_strings, + .get_ethtool_stats = qeth_core_get_ethtool_stats, + .get_stats_count = qeth_core_get_stats_count, + .get_drvinfo = qeth_core_get_drvinfo, +}; + +/* + * we need NOARP for IPv4 but we want neighbor solicitation for IPv6. Setting + * NOARP on the netdevice is no option because it also turns off neighbor + * solicitation. For IPv4 we install a neighbor_setup function. We don't want + * arp resolution but we want the hard header (packet socket will work + * e.g. tcpdump) + */ +static int qeth_l3_neigh_setup_noarp(struct neighbour *n) +{ + n->nud_state = NUD_NOARP; + memcpy(n->ha, "FAKELL", 6); + n->output = n->ops->connected_output; + return 0; +} + +static int +qeth_l3_neigh_setup(struct net_device *dev, struct neigh_parms *np) +{ + if (np->tbl->family == AF_INET) + np->neigh_setup = qeth_l3_neigh_setup_noarp; + + return 0; +} + +static int qeth_l3_setup_netdev(struct qeth_card *card) +{ + if (card->info.type == QETH_CARD_TYPE_OSAE) { + if ((card->info.link_type == QETH_LINK_TYPE_LANE_TR) || + (card->info.link_type == QETH_LINK_TYPE_HSTR)) { +#ifdef CONFIG_TR + card->dev = alloc_trdev(0); +#endif + if (!card->dev) + return -ENODEV; + } else { + card->dev = alloc_etherdev(0); + if (!card->dev) + return -ENODEV; + card->dev->neigh_setup = qeth_l3_neigh_setup; + + /*IPv6 address autoconfiguration stuff*/ + qeth_l3_get_unique_id(card); + if (!(card->info.unique_id & UNIQUE_ID_NOT_BY_CARD)) + card->dev->dev_id = card->info.unique_id & + 0xffff; + } + } else if (card->info.type == QETH_CARD_TYPE_IQD) { + card->dev = alloc_netdev(0, "hsi%d", ether_setup); + if (!card->dev) + return -ENODEV; + card->dev->flags |= IFF_NOARP; + qeth_l3_iqd_read_initial_mac(card); + } else + return -ENODEV; + + card->dev->hard_start_xmit = qeth_l3_hard_start_xmit; + card->dev->priv = card; + card->dev->tx_timeout = &qeth_tx_timeout; + card->dev->watchdog_timeo = QETH_TX_TIMEOUT; + card->dev->open = qeth_l3_open; + card->dev->stop = qeth_l3_stop; + card->dev->do_ioctl = qeth_l3_do_ioctl; + card->dev->get_stats = qeth_get_stats; + card->dev->change_mtu = qeth_change_mtu; + card->dev->set_multicast_list = qeth_l3_set_multicast_list; + card->dev->vlan_rx_register = qeth_l3_vlan_rx_register; + card->dev->vlan_rx_add_vid = qeth_l3_vlan_rx_add_vid; + card->dev->vlan_rx_kill_vid = qeth_l3_vlan_rx_kill_vid; + card->dev->mtu = card->info.initial_mtu; + SET_ETHTOOL_OPS(card->dev, &qeth_l3_ethtool_ops); + card->dev->features |= NETIF_F_HW_VLAN_TX | + NETIF_F_HW_VLAN_RX | + NETIF_F_HW_VLAN_FILTER; + + SET_NETDEV_DEV(card->dev, &card->gdev->dev); + return register_netdev(card->dev); +} + +static void qeth_l3_qdio_input_handler(struct ccw_device *ccwdev, + unsigned int status, unsigned int qdio_err, + unsigned int siga_err, unsigned int queue, int first_element, + int count, unsigned long card_ptr) +{ + struct net_device *net_dev; + struct qeth_card *card; + struct qeth_qdio_buffer *buffer; + int index; + int i; + + QETH_DBF_TEXT(trace, 6, "qdinput"); + card = (struct qeth_card *) card_ptr; + net_dev = card->dev; + if (card->options.performance_stats) { + card->perf_stats.inbound_cnt++; + card->perf_stats.inbound_start_time = qeth_get_micros(); + } + if (status & QDIO_STATUS_LOOK_FOR_ERROR) { + if (status & QDIO_STATUS_ACTIVATE_CHECK_CONDITION) { + QETH_DBF_TEXT(trace, 1, "qdinchk"); + 
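/*
 * Illustrative sketch only, not part of this patch: the .set_rx_csum and
 * .set_tso hooks registered in qeth_l3_ethtool_ops above are reached from
 * user space through the standard SIOCETHTOOL ioctl.  A minimal example of
 * toggling RX checksumming, which per qeth_l3_ethtool_set_rx_csum() above
 * cycles the card offline/online when the checksum type changes:
 *
 *	#include <string.h>
 *	#include <unistd.h>
 *	#include <sys/ioctl.h>
 *	#include <sys/socket.h>
 *	#include <net/if.h>
 *	#include <linux/ethtool.h>
 *	#include <linux/sockios.h>
 *
 *	int set_rx_csum(const char *ifname, int on)
 *	{
 *		struct ethtool_value eval = { .cmd = ETHTOOL_SRXCSUM, .data = on };
 *		struct ifreq ifr;
 *		int fd, rc;
 *
 *		fd = socket(AF_INET, SOCK_DGRAM, 0);
 *		if (fd < 0)
 *			return -1;
 *		memset(&ifr, 0, sizeof(ifr));
 *		strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
 *		ifr.ifr_data = (char *)&eval;
 *		rc = ioctl(fd, SIOCETHTOOL, &ifr);	/_ routed to .set_rx_csum _/
 *		close(fd);
 *		return rc;
 *	}
 */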
QETH_DBF_TEXT_(trace, 1, "%s", CARD_BUS_ID(card)); + QETH_DBF_TEXT_(trace, 1, "%04X%04X", + first_element, count); + QETH_DBF_TEXT_(trace, 1, "%04X%04X", queue, status); + qeth_schedule_recovery(card); + return; + } + } + for (i = first_element; i < (first_element + count); ++i) { + index = i % QDIO_MAX_BUFFERS_PER_Q; + buffer = &card->qdio.in_q->bufs[index]; + if (!((status & QDIO_STATUS_LOOK_FOR_ERROR) && + qeth_check_qdio_errors(buffer->buffer, + qdio_err, siga_err, "qinerr"))) + qeth_l3_process_inbound_buffer(card, buffer, index); + /* clear buffer and give back to hardware */ + qeth_put_buffer_pool_entry(card, buffer->pool_entry); + qeth_queue_input_buffer(card, index); + } + if (card->options.performance_stats) + card->perf_stats.inbound_time += qeth_get_micros() - + card->perf_stats.inbound_start_time; +} + +static int qeth_l3_probe_device(struct ccwgroup_device *gdev) +{ + struct qeth_card *card = dev_get_drvdata(&gdev->dev); + + qeth_l3_create_device_attributes(&gdev->dev); + card->options.layer2 = 0; + card->discipline.input_handler = (qdio_handler_t *) + qeth_l3_qdio_input_handler; + card->discipline.output_handler = (qdio_handler_t *) + qeth_qdio_output_handler; + card->discipline.recover = qeth_l3_recover; + return 0; +} + +static void qeth_l3_remove_device(struct ccwgroup_device *cgdev) +{ + struct qeth_card *card = dev_get_drvdata(&cgdev->dev); + + wait_event(card->wait_q, qeth_threads_running(card, 0xffffffff) == 0); + + if (cgdev->state == CCWGROUP_ONLINE) { + card->use_hard_stop = 1; + qeth_l3_set_offline(cgdev); + } + + if (card->dev) { + unregister_netdev(card->dev); + card->dev = NULL; + } + + qeth_l3_remove_device_attributes(&cgdev->dev); + qeth_l3_clear_ip_list(card, 0, 0); + qeth_l3_clear_ipato_list(card); + return; +} + +static int __qeth_l3_set_online(struct ccwgroup_device *gdev, int recovery_mode) +{ + struct qeth_card *card = dev_get_drvdata(&gdev->dev); + int rc = 0; + enum qeth_card_states recover_flag; + + BUG_ON(!card); + QETH_DBF_TEXT(setup, 2, "setonlin"); + QETH_DBF_HEX(setup, 2, &card, sizeof(void *)); + + qeth_set_allowed_threads(card, QETH_RECOVER_THREAD, 1); + if (qeth_wait_for_threads(card, ~QETH_RECOVER_THREAD)) { + PRINT_WARN("set_online of card %s interrupted by user!\n", + CARD_BUS_ID(card)); + return -ERESTARTSYS; + } + + recover_flag = card->state; + rc = ccw_device_set_online(CARD_RDEV(card)); + if (rc) { + QETH_DBF_TEXT_(setup, 2, "1err%d", rc); + return -EIO; + } + rc = ccw_device_set_online(CARD_WDEV(card)); + if (rc) { + QETH_DBF_TEXT_(setup, 2, "1err%d", rc); + return -EIO; + } + rc = ccw_device_set_online(CARD_DDEV(card)); + if (rc) { + QETH_DBF_TEXT_(setup, 2, "1err%d", rc); + return -EIO; + } + + rc = qeth_core_hardsetup_card(card); + if (rc) { + QETH_DBF_TEXT_(setup, 2, "2err%d", rc); + goto out_remove; + } + + qeth_l3_query_ipassists(card, QETH_PROT_IPV4); + + if (!card->dev && qeth_l3_setup_netdev(card)) + goto out_remove; + + card->state = CARD_STATE_HARDSETUP; + qeth_print_status_message(card); + + /* softsetup */ + QETH_DBF_TEXT(setup, 2, "softsetp"); + + rc = qeth_send_startlan(card); + if (rc) { + QETH_DBF_TEXT_(setup, 2, "1err%d", rc); + if (rc == 0xe080) { + PRINT_WARN("LAN on card %s if offline! 
" + "Waiting for STARTLAN from card.\n", + CARD_BUS_ID(card)); + card->lan_online = 0; + } + return rc; + } else + card->lan_online = 1; + qeth_set_large_send(card, card->options.large_send); + + rc = qeth_l3_setadapter_parms(card); + if (rc) + QETH_DBF_TEXT_(setup, 2, "2err%d", rc); + rc = qeth_l3_start_ipassists(card); + if (rc) + QETH_DBF_TEXT_(setup, 2, "3err%d", rc); + rc = qeth_l3_setrouting_v4(card); + if (rc) + QETH_DBF_TEXT_(setup, 2, "4err%d", rc); + rc = qeth_l3_setrouting_v6(card); + if (rc) + QETH_DBF_TEXT_(setup, 2, "5err%d", rc); + netif_tx_disable(card->dev); + + rc = qeth_init_qdio_queues(card); + if (rc) { + QETH_DBF_TEXT_(setup, 2, "6err%d", rc); + goto out_remove; + } + card->state = CARD_STATE_SOFTSETUP; + netif_carrier_on(card->dev); + + qeth_set_allowed_threads(card, 0xffffffff, 0); + if ((recover_flag == CARD_STATE_RECOVER) && recovery_mode) { + qeth_l3_open(card->dev); + qeth_l3_set_multicast_list(card->dev); + } + /* let user_space know that device is online */ + kobject_uevent(&gdev->dev.kobj, KOBJ_CHANGE); + return 0; +out_remove: + card->use_hard_stop = 1; + qeth_l3_stop_card(card, 0); + ccw_device_set_offline(CARD_DDEV(card)); + ccw_device_set_offline(CARD_WDEV(card)); + ccw_device_set_offline(CARD_RDEV(card)); + if (recover_flag == CARD_STATE_RECOVER) + card->state = CARD_STATE_RECOVER; + else + card->state = CARD_STATE_DOWN; + return -ENODEV; +} + +static int qeth_l3_set_online(struct ccwgroup_device *gdev) +{ + return __qeth_l3_set_online(gdev, 0); +} + +static int __qeth_l3_set_offline(struct ccwgroup_device *cgdev, + int recovery_mode) +{ + struct qeth_card *card = dev_get_drvdata(&cgdev->dev); + int rc = 0, rc2 = 0, rc3 = 0; + enum qeth_card_states recover_flag; + + QETH_DBF_TEXT(setup, 3, "setoffl"); + QETH_DBF_HEX(setup, 3, &card, sizeof(void *)); + + if (card->dev && netif_carrier_ok(card->dev)) + netif_carrier_off(card->dev); + recover_flag = card->state; + if (qeth_l3_stop_card(card, recovery_mode) == -ERESTARTSYS) { + PRINT_WARN("Stopping card %s interrupted by user!\n", + CARD_BUS_ID(card)); + return -ERESTARTSYS; + } + rc = ccw_device_set_offline(CARD_DDEV(card)); + rc2 = ccw_device_set_offline(CARD_WDEV(card)); + rc3 = ccw_device_set_offline(CARD_RDEV(card)); + if (!rc) + rc = (rc2) ? 
rc2 : rc3; + if (rc) + QETH_DBF_TEXT_(setup, 2, "1err%d", rc); + if (recover_flag == CARD_STATE_UP) + card->state = CARD_STATE_RECOVER; + /* let user_space know that device is offline */ + kobject_uevent(&cgdev->dev.kobj, KOBJ_CHANGE); + return 0; +} + +static int qeth_l3_set_offline(struct ccwgroup_device *cgdev) +{ + return __qeth_l3_set_offline(cgdev, 0); +} + +static int qeth_l3_recover(void *ptr) +{ + struct qeth_card *card; + int rc = 0; + + card = (struct qeth_card *) ptr; + QETH_DBF_TEXT(trace, 2, "recover1"); + QETH_DBF_HEX(trace, 2, &card, sizeof(void *)); + if (!qeth_do_run_thread(card, QETH_RECOVER_THREAD)) + return 0; + QETH_DBF_TEXT(trace, 2, "recover2"); + PRINT_WARN("Recovery of device %s started ...\n", + CARD_BUS_ID(card)); + card->use_hard_stop = 1; + __qeth_l3_set_offline(card->gdev, 1); + rc = __qeth_l3_set_online(card->gdev, 1); + /* don't run another scheduled recovery */ + qeth_clear_thread_start_bit(card, QETH_RECOVER_THREAD); + qeth_clear_thread_running_bit(card, QETH_RECOVER_THREAD); + if (!rc) + PRINT_INFO("Device %s successfully recovered!\n", + CARD_BUS_ID(card)); + else + PRINT_INFO("Device %s could not be recovered!\n", + CARD_BUS_ID(card)); + return 0; +} + +static void qeth_l3_shutdown(struct ccwgroup_device *gdev) +{ + struct qeth_card *card = dev_get_drvdata(&gdev->dev); + qeth_l3_clear_ip_list(card, 0, 0); + qeth_qdio_clear_card(card, 0); + qeth_clear_qdio_buffers(card); +} + +struct ccwgroup_driver qeth_l3_ccwgroup_driver = { + .probe = qeth_l3_probe_device, + .remove = qeth_l3_remove_device, + .set_online = qeth_l3_set_online, + .set_offline = qeth_l3_set_offline, + .shutdown = qeth_l3_shutdown, +}; +EXPORT_SYMBOL_GPL(qeth_l3_ccwgroup_driver); + +static int qeth_l3_ip_event(struct notifier_block *this, + unsigned long event, void *ptr) +{ + struct in_ifaddr *ifa = (struct in_ifaddr *)ptr; + struct net_device *dev = (struct net_device *)ifa->ifa_dev->dev; + struct qeth_ipaddr *addr; + struct qeth_card *card; + + QETH_DBF_TEXT(trace, 3, "ipevent"); + card = qeth_l3_get_card_from_dev(dev); + if (!card) + return NOTIFY_DONE; + + addr = qeth_l3_get_addr_buffer(QETH_PROT_IPV4); + if (addr != NULL) { + addr->u.a4.addr = ifa->ifa_address; + addr->u.a4.mask = ifa->ifa_mask; + addr->type = QETH_IP_TYPE_NORMAL; + } else + goto out; + + switch (event) { + case NETDEV_UP: + if (!qeth_l3_add_ip(card, addr)) + kfree(addr); + break; + case NETDEV_DOWN: + if (!qeth_l3_delete_ip(card, addr)) + kfree(addr); + break; + default: + break; + } + qeth_l3_set_ip_addr_list(card); +out: + return NOTIFY_DONE; +} + +static struct notifier_block qeth_l3_ip_notifier = { + qeth_l3_ip_event, + NULL, +}; + +#ifdef CONFIG_QETH_IPV6 +/** + * IPv6 event handler + */ +static int qeth_l3_ip6_event(struct notifier_block *this, + unsigned long event, void *ptr) +{ + struct inet6_ifaddr *ifa = (struct inet6_ifaddr *)ptr; + struct net_device *dev = (struct net_device *)ifa->idev->dev; + struct qeth_ipaddr *addr; + struct qeth_card *card; + + QETH_DBF_TEXT(trace, 3, "ip6event"); + + card = qeth_l3_get_card_from_dev(dev); + if (!card) + return NOTIFY_DONE; + if (!qeth_is_supported(card, IPA_IPV6)) + return NOTIFY_DONE; + + addr = qeth_l3_get_addr_buffer(QETH_PROT_IPV6); + if (addr != NULL) { + memcpy(&addr->u.a6.addr, &ifa->addr, sizeof(struct in6_addr)); + addr->u.a6.pfxlen = ifa->prefix_len; + addr->type = QETH_IP_TYPE_NORMAL; + } else + goto out; + + switch (event) { + case NETDEV_UP: + if (!qeth_l3_add_ip(card, addr)) + kfree(addr); + break; + case NETDEV_DOWN: + if (!qeth_l3_delete_ip(card, 
addr)) + kfree(addr); + break; + default: + break; + } + qeth_l3_set_ip_addr_list(card); +out: + return NOTIFY_DONE; +} + +static struct notifier_block qeth_l3_ip6_notifier = { + qeth_l3_ip6_event, + NULL, +}; +#endif + +static int qeth_l3_register_notifiers(void) +{ + int rc; + + QETH_DBF_TEXT(trace, 5, "regnotif"); + rc = register_inetaddr_notifier(&qeth_l3_ip_notifier); + if (rc) + return rc; +#ifdef CONFIG_QETH_IPV6 + rc = register_inet6addr_notifier(&qeth_l3_ip6_notifier); + if (rc) { + unregister_inetaddr_notifier(&qeth_l3_ip_notifier); + return rc; + } +#else + PRINT_WARN("layer 3 discipline no IPv6 support\n"); +#endif + return 0; +} + +static void qeth_l3_unregister_notifiers(void) +{ + + QETH_DBF_TEXT(trace, 5, "unregnot"); + BUG_ON(unregister_inetaddr_notifier(&qeth_l3_ip_notifier)); +#ifdef CONFIG_QETH_IPV6 + BUG_ON(unregister_inet6addr_notifier(&qeth_l3_ip6_notifier)); +#endif /* QETH_IPV6 */ +} + +static int __init qeth_l3_init(void) +{ + int rc = 0; + + PRINT_INFO("register layer 3 discipline\n"); + rc = qeth_l3_register_notifiers(); + return rc; +} + +static void __exit qeth_l3_exit(void) +{ + qeth_l3_unregister_notifiers(); + PRINT_INFO("unregister layer 3 discipline\n"); +} + +module_init(qeth_l3_init); +module_exit(qeth_l3_exit); +MODULE_AUTHOR("Frank Blaschka "); +MODULE_DESCRIPTION("qeth layer 3 discipline"); +MODULE_LICENSE("GPL"); diff --git a/drivers/s390/net/qeth_l3_sys.c b/drivers/s390/net/qeth_l3_sys.c new file mode 100644 index 000000000000..08f51fd902c4 --- /dev/null +++ b/drivers/s390/net/qeth_l3_sys.c @@ -0,0 +1,1051 @@ +/* + * drivers/s390/net/qeth_l3_sys.c + * + * Copyright IBM Corp. 2007 + * Author(s): Utz Bacher , + * Frank Pavlic , + * Thomas Spatzier , + * Frank Blaschka + */ + +#include "qeth_l3.h" + +#define QETH_DEVICE_ATTR(_id, _name, _mode, _show, _store) \ +struct device_attribute dev_attr_##_id = __ATTR(_name, _mode, _show, _store) + +static const char *qeth_l3_get_checksum_str(struct qeth_card *card) +{ + if (card->options.checksum_type == SW_CHECKSUMMING) + return "sw"; + else if (card->options.checksum_type == HW_CHECKSUMMING) + return "hw"; + else + return "no"; +} + +static ssize_t qeth_l3_dev_route_show(struct qeth_card *card, + struct qeth_routing_info *route, char *buf) +{ + switch (route->type) { + case PRIMARY_ROUTER: + return sprintf(buf, "%s\n", "primary router"); + case SECONDARY_ROUTER: + return sprintf(buf, "%s\n", "secondary router"); + case MULTICAST_ROUTER: + if (card->info.broadcast_capable == QETH_BROADCAST_WITHOUT_ECHO) + return sprintf(buf, "%s\n", "multicast router+"); + else + return sprintf(buf, "%s\n", "multicast router"); + case PRIMARY_CONNECTOR: + if (card->info.broadcast_capable == QETH_BROADCAST_WITHOUT_ECHO) + return sprintf(buf, "%s\n", "primary connector+"); + else + return sprintf(buf, "%s\n", "primary connector"); + case SECONDARY_CONNECTOR: + if (card->info.broadcast_capable == QETH_BROADCAST_WITHOUT_ECHO) + return sprintf(buf, "%s\n", "secondary connector+"); + else + return sprintf(buf, "%s\n", "secondary connector"); + default: + return sprintf(buf, "%s\n", "no"); + } +} + +static ssize_t qeth_l3_dev_route4_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct qeth_card *card = dev_get_drvdata(dev); + + if (!card) + return -EINVAL; + + return qeth_l3_dev_route_show(card, &card->options.route4, buf); +} + +static ssize_t qeth_l3_dev_route_store(struct qeth_card *card, + struct qeth_routing_info *route, enum qeth_prot_versions prot, + const char *buf, size_t count) +{ + enum 
qeth_routing_types old_route_type = route->type; + char *tmp; + int rc; + + tmp = strsep((char **) &buf, "\n"); + + if (!strcmp(tmp, "no_router")) { + route->type = NO_ROUTER; + } else if (!strcmp(tmp, "primary_connector")) { + route->type = PRIMARY_CONNECTOR; + } else if (!strcmp(tmp, "secondary_connector")) { + route->type = SECONDARY_CONNECTOR; + } else if (!strcmp(tmp, "primary_router")) { + route->type = PRIMARY_ROUTER; + } else if (!strcmp(tmp, "secondary_router")) { + route->type = SECONDARY_ROUTER; + } else if (!strcmp(tmp, "multicast_router")) { + route->type = MULTICAST_ROUTER; + } else { + PRINT_WARN("Invalid routing type '%s'.\n", tmp); + return -EINVAL; + } + if (((card->state == CARD_STATE_SOFTSETUP) || + (card->state == CARD_STATE_UP)) && + (old_route_type != route->type)) { + if (prot == QETH_PROT_IPV4) + rc = qeth_l3_setrouting_v4(card); + else if (prot == QETH_PROT_IPV6) + rc = qeth_l3_setrouting_v6(card); + } + return count; +} + +static ssize_t qeth_l3_dev_route4_store(struct device *dev, + struct device_attribute *attr, const char *buf, size_t count) +{ + struct qeth_card *card = dev_get_drvdata(dev); + + if (!card) + return -EINVAL; + + return qeth_l3_dev_route_store(card, &card->options.route4, + QETH_PROT_IPV4, buf, count); +} + +static DEVICE_ATTR(route4, 0644, qeth_l3_dev_route4_show, + qeth_l3_dev_route4_store); + +static ssize_t qeth_l3_dev_route6_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct qeth_card *card = dev_get_drvdata(dev); + + if (!card) + return -EINVAL; + + if (!qeth_is_supported(card, IPA_IPV6)) + return sprintf(buf, "%s\n", "n/a"); + + return qeth_l3_dev_route_show(card, &card->options.route6, buf); +} + +static ssize_t qeth_l3_dev_route6_store(struct device *dev, + struct device_attribute *attr, const char *buf, size_t count) +{ + struct qeth_card *card = dev_get_drvdata(dev); + + if (!card) + return -EINVAL; + + if (!qeth_is_supported(card, IPA_IPV6)) { + PRINT_WARN("IPv6 not supported for interface %s.\n" + "Routing status no changed.\n", + QETH_CARD_IFNAME(card)); + return -ENOTSUPP; + } + + return qeth_l3_dev_route_store(card, &card->options.route6, + QETH_PROT_IPV6, buf, count); +} + +static DEVICE_ATTR(route6, 0644, qeth_l3_dev_route6_show, + qeth_l3_dev_route6_store); + +static ssize_t qeth_l3_dev_fake_broadcast_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct qeth_card *card = dev_get_drvdata(dev); + + if (!card) + return -EINVAL; + + return sprintf(buf, "%i\n", card->options.fake_broadcast? 
1:0); +} + +static ssize_t qeth_l3_dev_fake_broadcast_store(struct device *dev, + struct device_attribute *attr, const char *buf, size_t count) +{ + struct qeth_card *card = dev_get_drvdata(dev); + char *tmp; + int i; + + if (!card) + return -EINVAL; + + if ((card->state != CARD_STATE_DOWN) && + (card->state != CARD_STATE_RECOVER)) + return -EPERM; + + i = simple_strtoul(buf, &tmp, 16); + if ((i == 0) || (i == 1)) + card->options.fake_broadcast = i; + else { + PRINT_WARN("fake_broadcast: write 0 or 1 to this file!\n"); + return -EINVAL; + } + return count; +} + +static DEVICE_ATTR(fake_broadcast, 0644, qeth_l3_dev_fake_broadcast_show, + qeth_l3_dev_fake_broadcast_store); + +static ssize_t qeth_l3_dev_broadcast_mode_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct qeth_card *card = dev_get_drvdata(dev); + + if (!card) + return -EINVAL; + + if (!((card->info.link_type == QETH_LINK_TYPE_HSTR) || + (card->info.link_type == QETH_LINK_TYPE_LANE_TR))) + return sprintf(buf, "n/a\n"); + + return sprintf(buf, "%s\n", (card->options.broadcast_mode == + QETH_TR_BROADCAST_ALLRINGS)? + "all rings":"local"); +} + +static ssize_t qeth_l3_dev_broadcast_mode_store(struct device *dev, + struct device_attribute *attr, const char *buf, size_t count) +{ + struct qeth_card *card = dev_get_drvdata(dev); + char *tmp; + + if (!card) + return -EINVAL; + + if ((card->state != CARD_STATE_DOWN) && + (card->state != CARD_STATE_RECOVER)) + return -EPERM; + + if (!((card->info.link_type == QETH_LINK_TYPE_HSTR) || + (card->info.link_type == QETH_LINK_TYPE_LANE_TR))) { + PRINT_WARN("Device is not a tokenring device!\n"); + return -EINVAL; + } + + tmp = strsep((char **) &buf, "\n"); + + if (!strcmp(tmp, "local")) { + card->options.broadcast_mode = QETH_TR_BROADCAST_LOCAL; + return count; + } else if (!strcmp(tmp, "all_rings")) { + card->options.broadcast_mode = QETH_TR_BROADCAST_ALLRINGS; + return count; + } else { + PRINT_WARN("broadcast_mode: invalid mode %s!\n", + tmp); + return -EINVAL; + } + return count; +} + +static DEVICE_ATTR(broadcast_mode, 0644, qeth_l3_dev_broadcast_mode_show, + qeth_l3_dev_broadcast_mode_store); + +static ssize_t qeth_l3_dev_canonical_macaddr_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct qeth_card *card = dev_get_drvdata(dev); + + if (!card) + return -EINVAL; + + if (!((card->info.link_type == QETH_LINK_TYPE_HSTR) || + (card->info.link_type == QETH_LINK_TYPE_LANE_TR))) + return sprintf(buf, "n/a\n"); + + return sprintf(buf, "%i\n", (card->options.macaddr_mode == + QETH_TR_MACADDR_CANONICAL)? 1:0); +} + +static ssize_t qeth_l3_dev_canonical_macaddr_store(struct device *dev, + struct device_attribute *attr, const char *buf, size_t count) +{ + struct qeth_card *card = dev_get_drvdata(dev); + char *tmp; + int i; + + if (!card) + return -EINVAL; + + if ((card->state != CARD_STATE_DOWN) && + (card->state != CARD_STATE_RECOVER)) + return -EPERM; + + if (!((card->info.link_type == QETH_LINK_TYPE_HSTR) || + (card->info.link_type == QETH_LINK_TYPE_LANE_TR))) { + PRINT_WARN("Device is not a tokenring device!\n"); + return -EINVAL; + } + + i = simple_strtoul(buf, &tmp, 16); + if ((i == 0) || (i == 1)) + card->options.macaddr_mode = i? 
+ QETH_TR_MACADDR_CANONICAL : + QETH_TR_MACADDR_NONCANONICAL; + else { + PRINT_WARN("canonical_macaddr: write 0 or 1 to this file!\n"); + return -EINVAL; + } + return count; +} + +static DEVICE_ATTR(canonical_macaddr, 0644, qeth_l3_dev_canonical_macaddr_show, + qeth_l3_dev_canonical_macaddr_store); + +static ssize_t qeth_l3_dev_checksum_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct qeth_card *card = dev_get_drvdata(dev); + + if (!card) + return -EINVAL; + + return sprintf(buf, "%s checksumming\n", + qeth_l3_get_checksum_str(card)); +} + +static ssize_t qeth_l3_dev_checksum_store(struct device *dev, + struct device_attribute *attr, const char *buf, size_t count) +{ + struct qeth_card *card = dev_get_drvdata(dev); + char *tmp; + + if (!card) + return -EINVAL; + + if ((card->state != CARD_STATE_DOWN) && + (card->state != CARD_STATE_RECOVER)) + return -EPERM; + + tmp = strsep((char **) &buf, "\n"); + if (!strcmp(tmp, "sw_checksumming")) + card->options.checksum_type = SW_CHECKSUMMING; + else if (!strcmp(tmp, "hw_checksumming")) + card->options.checksum_type = HW_CHECKSUMMING; + else if (!strcmp(tmp, "no_checksumming")) + card->options.checksum_type = NO_CHECKSUMMING; + else { + PRINT_WARN("Unknown checksumming type '%s'\n", tmp); + return -EINVAL; + } + return count; +} + +static DEVICE_ATTR(checksumming, 0644, qeth_l3_dev_checksum_show, + qeth_l3_dev_checksum_store); + +static struct attribute *qeth_l3_device_attrs[] = { + &dev_attr_route4.attr, + &dev_attr_route6.attr, + &dev_attr_fake_broadcast.attr, + &dev_attr_broadcast_mode.attr, + &dev_attr_canonical_macaddr.attr, + &dev_attr_checksumming.attr, + NULL, +}; + +static struct attribute_group qeth_l3_device_attr_group = { + .attrs = qeth_l3_device_attrs, +}; + +static ssize_t qeth_l3_dev_ipato_enable_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct qeth_card *card = dev_get_drvdata(dev); + + if (!card) + return -EINVAL; + + return sprintf(buf, "%i\n", card->ipato.enabled? 1:0); +} + +static ssize_t qeth_l3_dev_ipato_enable_store(struct device *dev, + struct device_attribute *attr, const char *buf, size_t count) +{ + struct qeth_card *card = dev_get_drvdata(dev); + char *tmp; + + if (!card) + return -EINVAL; + + if ((card->state != CARD_STATE_DOWN) && + (card->state != CARD_STATE_RECOVER)) + return -EPERM; + + tmp = strsep((char **) &buf, "\n"); + if (!strcmp(tmp, "toggle")) { + card->ipato.enabled = (card->ipato.enabled)? 0 : 1; + } else if (!strcmp(tmp, "1")) { + card->ipato.enabled = 1; + } else if (!strcmp(tmp, "0")) { + card->ipato.enabled = 0; + } else { + PRINT_WARN("ipato_enable: write 0, 1 or 'toggle' to " + "this file\n"); + return -EINVAL; + } + return count; +} + +static QETH_DEVICE_ATTR(ipato_enable, enable, 0644, + qeth_l3_dev_ipato_enable_show, + qeth_l3_dev_ipato_enable_store); + +static ssize_t qeth_l3_dev_ipato_invert4_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct qeth_card *card = dev_get_drvdata(dev); + + if (!card) + return -EINVAL; + + return sprintf(buf, "%i\n", card->ipato.invert4? 1:0); +} + +static ssize_t qeth_l3_dev_ipato_invert4_store(struct device *dev, + struct device_attribute *attr, + const char *buf, size_t count) +{ + struct qeth_card *card = dev_get_drvdata(dev); + char *tmp; + + if (!card) + return -EINVAL; + + tmp = strsep((char **) &buf, "\n"); + if (!strcmp(tmp, "toggle")) { + card->ipato.invert4 = (card->ipato.invert4)? 
0 : 1; + } else if (!strcmp(tmp, "1")) { + card->ipato.invert4 = 1; + } else if (!strcmp(tmp, "0")) { + card->ipato.invert4 = 0; + } else { + PRINT_WARN("ipato_invert4: write 0, 1 or 'toggle' to " + "this file\n"); + return -EINVAL; + } + return count; +} + +static QETH_DEVICE_ATTR(ipato_invert4, invert4, 0644, + qeth_l3_dev_ipato_invert4_show, + qeth_l3_dev_ipato_invert4_store); + +static ssize_t qeth_l3_dev_ipato_add_show(char *buf, struct qeth_card *card, + enum qeth_prot_versions proto) +{ + struct qeth_ipato_entry *ipatoe; + unsigned long flags; + char addr_str[40]; + int entry_len; /* length of 1 entry string, differs between v4 and v6 */ + int i = 0; + + entry_len = (proto == QETH_PROT_IPV4)? 12 : 40; + /* add strlen for "/\n" */ + entry_len += (proto == QETH_PROT_IPV4)? 5 : 6; + spin_lock_irqsave(&card->ip_lock, flags); + list_for_each_entry(ipatoe, &card->ipato.entries, entry) { + if (ipatoe->proto != proto) + continue; + /* String must not be longer than PAGE_SIZE. So we check if + * string length gets near PAGE_SIZE. Then we can savely display + * the next IPv6 address (worst case, compared to IPv4) */ + if ((PAGE_SIZE - i) <= entry_len) + break; + qeth_l3_ipaddr_to_string(proto, ipatoe->addr, addr_str); + i += snprintf(buf + i, PAGE_SIZE - i, + "%s/%i\n", addr_str, ipatoe->mask_bits); + } + spin_unlock_irqrestore(&card->ip_lock, flags); + i += snprintf(buf + i, PAGE_SIZE - i, "\n"); + + return i; +} + +static ssize_t qeth_l3_dev_ipato_add4_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct qeth_card *card = dev_get_drvdata(dev); + + if (!card) + return -EINVAL; + + return qeth_l3_dev_ipato_add_show(buf, card, QETH_PROT_IPV4); +} + +static int qeth_l3_parse_ipatoe(const char *buf, enum qeth_prot_versions proto, + u8 *addr, int *mask_bits) +{ + const char *start, *end; + char *tmp; + char buffer[40] = {0, }; + + start = buf; + /* get address string */ + end = strchr(start, '/'); + if (!end || (end - start >= 40)) { + PRINT_WARN("Invalid format for ipato_addx/delx. " + "Use /\n"); + return -EINVAL; + } + strncpy(buffer, start, end - start); + if (qeth_l3_string_to_ipaddr(buffer, proto, addr)) { + PRINT_WARN("Invalid IP address format!\n"); + return -EINVAL; + } + start = end + 1; + *mask_bits = simple_strtoul(start, &tmp, 10); + if (!strlen(start) || + (tmp == start) || + (*mask_bits > ((proto == QETH_PROT_IPV4) ? 32 : 128))) { + PRINT_WARN("Invalid mask bits for ipato_addx/delx !\n"); + return -EINVAL; + } + return 0; +} + +static ssize_t qeth_l3_dev_ipato_add_store(const char *buf, size_t count, + struct qeth_card *card, enum qeth_prot_versions proto) +{ + struct qeth_ipato_entry *ipatoe; + u8 addr[16]; + int mask_bits; + int rc; + + rc = qeth_l3_parse_ipatoe(buf, proto, addr, &mask_bits); + if (rc) + return rc; + + ipatoe = kzalloc(sizeof(struct qeth_ipato_entry), GFP_KERNEL); + if (!ipatoe) { + PRINT_WARN("No memory to allocate ipato entry\n"); + return -ENOMEM; + } + ipatoe->proto = proto; + memcpy(ipatoe->addr, addr, (proto == QETH_PROT_IPV4)? 
4:16); + ipatoe->mask_bits = mask_bits; + + rc = qeth_l3_add_ipato_entry(card, ipatoe); + if (rc) { + kfree(ipatoe); + return rc; + } + + return count; +} + +static ssize_t qeth_l3_dev_ipato_add4_store(struct device *dev, + struct device_attribute *attr, const char *buf, size_t count) +{ + struct qeth_card *card = dev_get_drvdata(dev); + + if (!card) + return -EINVAL; + + return qeth_l3_dev_ipato_add_store(buf, count, card, QETH_PROT_IPV4); +} + +static QETH_DEVICE_ATTR(ipato_add4, add4, 0644, + qeth_l3_dev_ipato_add4_show, + qeth_l3_dev_ipato_add4_store); + +static ssize_t qeth_l3_dev_ipato_del_store(const char *buf, size_t count, + struct qeth_card *card, enum qeth_prot_versions proto) +{ + u8 addr[16]; + int mask_bits; + int rc; + + rc = qeth_l3_parse_ipatoe(buf, proto, addr, &mask_bits); + if (rc) + return rc; + + qeth_l3_del_ipato_entry(card, proto, addr, mask_bits); + + return count; +} + +static ssize_t qeth_l3_dev_ipato_del4_store(struct device *dev, + struct device_attribute *attr, const char *buf, size_t count) +{ + struct qeth_card *card = dev_get_drvdata(dev); + + if (!card) + return -EINVAL; + + return qeth_l3_dev_ipato_del_store(buf, count, card, QETH_PROT_IPV4); +} + +static QETH_DEVICE_ATTR(ipato_del4, del4, 0200, NULL, + qeth_l3_dev_ipato_del4_store); + +static ssize_t qeth_l3_dev_ipato_invert6_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct qeth_card *card = dev_get_drvdata(dev); + + if (!card) + return -EINVAL; + + return sprintf(buf, "%i\n", card->ipato.invert6? 1:0); +} + +static ssize_t qeth_l3_dev_ipato_invert6_store(struct device *dev, + struct device_attribute *attr, const char *buf, size_t count) +{ + struct qeth_card *card = dev_get_drvdata(dev); + char *tmp; + + if (!card) + return -EINVAL; + + tmp = strsep((char **) &buf, "\n"); + if (!strcmp(tmp, "toggle")) { + card->ipato.invert6 = (card->ipato.invert6)? 
0 : 1; + } else if (!strcmp(tmp, "1")) { + card->ipato.invert6 = 1; + } else if (!strcmp(tmp, "0")) { + card->ipato.invert6 = 0; + } else { + PRINT_WARN("ipato_invert6: write 0, 1 or 'toggle' to " + "this file\n"); + return -EINVAL; + } + return count; +} + +static QETH_DEVICE_ATTR(ipato_invert6, invert6, 0644, + qeth_l3_dev_ipato_invert6_show, + qeth_l3_dev_ipato_invert6_store); + + +static ssize_t qeth_l3_dev_ipato_add6_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct qeth_card *card = dev_get_drvdata(dev); + + if (!card) + return -EINVAL; + + return qeth_l3_dev_ipato_add_show(buf, card, QETH_PROT_IPV6); +} + +static ssize_t qeth_l3_dev_ipato_add6_store(struct device *dev, + struct device_attribute *attr, const char *buf, size_t count) +{ + struct qeth_card *card = dev_get_drvdata(dev); + + if (!card) + return -EINVAL; + + return qeth_l3_dev_ipato_add_store(buf, count, card, QETH_PROT_IPV6); +} + +static QETH_DEVICE_ATTR(ipato_add6, add6, 0644, + qeth_l3_dev_ipato_add6_show, + qeth_l3_dev_ipato_add6_store); + +static ssize_t qeth_l3_dev_ipato_del6_store(struct device *dev, + struct device_attribute *attr, const char *buf, size_t count) +{ + struct qeth_card *card = dev_get_drvdata(dev); + + if (!card) + return -EINVAL; + + return qeth_l3_dev_ipato_del_store(buf, count, card, QETH_PROT_IPV6); +} + +static QETH_DEVICE_ATTR(ipato_del6, del6, 0200, NULL, + qeth_l3_dev_ipato_del6_store); + +static struct attribute *qeth_ipato_device_attrs[] = { + &dev_attr_ipato_enable.attr, + &dev_attr_ipato_invert4.attr, + &dev_attr_ipato_add4.attr, + &dev_attr_ipato_del4.attr, + &dev_attr_ipato_invert6.attr, + &dev_attr_ipato_add6.attr, + &dev_attr_ipato_del6.attr, + NULL, +}; + +static struct attribute_group qeth_device_ipato_group = { + .name = "ipa_takeover", + .attrs = qeth_ipato_device_attrs, +}; + +static ssize_t qeth_l3_dev_vipa_add_show(char *buf, struct qeth_card *card, + enum qeth_prot_versions proto) +{ + struct qeth_ipaddr *ipaddr; + char addr_str[40]; + int entry_len; /* length of 1 entry string, differs between v4 and v6 */ + unsigned long flags; + int i = 0; + + entry_len = (proto == QETH_PROT_IPV4)? 12 : 40; + entry_len += 2; /* \n + terminator */ + spin_lock_irqsave(&card->ip_lock, flags); + list_for_each_entry(ipaddr, &card->ip_list, entry) { + if (ipaddr->proto != proto) + continue; + if (ipaddr->type != QETH_IP_TYPE_VIPA) + continue; + /* String must not be longer than PAGE_SIZE. So we check if + * string length gets near PAGE_SIZE. 
Then we can savely display + * the next IPv6 address (worst case, compared to IPv4) */ + if ((PAGE_SIZE - i) <= entry_len) + break; + qeth_l3_ipaddr_to_string(proto, (const u8 *)&ipaddr->u, + addr_str); + i += snprintf(buf + i, PAGE_SIZE - i, "%s\n", addr_str); + } + spin_unlock_irqrestore(&card->ip_lock, flags); + i += snprintf(buf + i, PAGE_SIZE - i, "\n"); + + return i; +} + +static ssize_t qeth_l3_dev_vipa_add4_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct qeth_card *card = dev_get_drvdata(dev); + + if (!card) + return -EINVAL; + + return qeth_l3_dev_vipa_add_show(buf, card, QETH_PROT_IPV4); +} + +static int qeth_l3_parse_vipae(const char *buf, enum qeth_prot_versions proto, + u8 *addr) +{ + if (qeth_l3_string_to_ipaddr(buf, proto, addr)) { + PRINT_WARN("Invalid IP address format!\n"); + return -EINVAL; + } + return 0; +} + +static ssize_t qeth_l3_dev_vipa_add_store(const char *buf, size_t count, + struct qeth_card *card, enum qeth_prot_versions proto) +{ + u8 addr[16] = {0, }; + int rc; + + rc = qeth_l3_parse_vipae(buf, proto, addr); + if (rc) + return rc; + + rc = qeth_l3_add_vipa(card, proto, addr); + if (rc) + return rc; + + return count; +} + +static ssize_t qeth_l3_dev_vipa_add4_store(struct device *dev, + struct device_attribute *attr, const char *buf, size_t count) +{ + struct qeth_card *card = dev_get_drvdata(dev); + + if (!card) + return -EINVAL; + + return qeth_l3_dev_vipa_add_store(buf, count, card, QETH_PROT_IPV4); +} + +static QETH_DEVICE_ATTR(vipa_add4, add4, 0644, + qeth_l3_dev_vipa_add4_show, + qeth_l3_dev_vipa_add4_store); + +static ssize_t qeth_l3_dev_vipa_del_store(const char *buf, size_t count, + struct qeth_card *card, enum qeth_prot_versions proto) +{ + u8 addr[16]; + int rc; + + rc = qeth_l3_parse_vipae(buf, proto, addr); + if (rc) + return rc; + + qeth_l3_del_vipa(card, proto, addr); + + return count; +} + +static ssize_t qeth_l3_dev_vipa_del4_store(struct device *dev, + struct device_attribute *attr, const char *buf, size_t count) +{ + struct qeth_card *card = dev_get_drvdata(dev); + + if (!card) + return -EINVAL; + + return qeth_l3_dev_vipa_del_store(buf, count, card, QETH_PROT_IPV4); +} + +static QETH_DEVICE_ATTR(vipa_del4, del4, 0200, NULL, + qeth_l3_dev_vipa_del4_store); + +static ssize_t qeth_l3_dev_vipa_add6_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct qeth_card *card = dev_get_drvdata(dev); + + if (!card) + return -EINVAL; + + return qeth_l3_dev_vipa_add_show(buf, card, QETH_PROT_IPV6); +} + +static ssize_t qeth_l3_dev_vipa_add6_store(struct device *dev, + struct device_attribute *attr, const char *buf, size_t count) +{ + struct qeth_card *card = dev_get_drvdata(dev); + + if (!card) + return -EINVAL; + + return qeth_l3_dev_vipa_add_store(buf, count, card, QETH_PROT_IPV6); +} + +static QETH_DEVICE_ATTR(vipa_add6, add6, 0644, + qeth_l3_dev_vipa_add6_show, + qeth_l3_dev_vipa_add6_store); + +static ssize_t qeth_l3_dev_vipa_del6_store(struct device *dev, + struct device_attribute *attr, const char *buf, size_t count) +{ + struct qeth_card *card = dev_get_drvdata(dev); + + if (!card) + return -EINVAL; + + return qeth_l3_dev_vipa_del_store(buf, count, card, QETH_PROT_IPV6); +} + +static QETH_DEVICE_ATTR(vipa_del6, del6, 0200, NULL, + qeth_l3_dev_vipa_del6_store); + +static struct attribute *qeth_vipa_device_attrs[] = { + &dev_attr_vipa_add4.attr, + &dev_attr_vipa_del4.attr, + &dev_attr_vipa_add6.attr, + &dev_attr_vipa_del6.attr, + NULL, +}; + +static struct attribute_group 
qeth_device_vipa_group = { + .name = "vipa", + .attrs = qeth_vipa_device_attrs, +}; + +static ssize_t qeth_l3_dev_rxip_add_show(char *buf, struct qeth_card *card, + enum qeth_prot_versions proto) +{ + struct qeth_ipaddr *ipaddr; + char addr_str[40]; + int entry_len; /* length of 1 entry string, differs between v4 and v6 */ + unsigned long flags; + int i = 0; + + entry_len = (proto == QETH_PROT_IPV4)? 12 : 40; + entry_len += 2; /* \n + terminator */ + spin_lock_irqsave(&card->ip_lock, flags); + list_for_each_entry(ipaddr, &card->ip_list, entry) { + if (ipaddr->proto != proto) + continue; + if (ipaddr->type != QETH_IP_TYPE_RXIP) + continue; + /* String must not be longer than PAGE_SIZE. So we check if + * string length gets near PAGE_SIZE. Then we can savely display + * the next IPv6 address (worst case, compared to IPv4) */ + if ((PAGE_SIZE - i) <= entry_len) + break; + qeth_l3_ipaddr_to_string(proto, (const u8 *)&ipaddr->u, + addr_str); + i += snprintf(buf + i, PAGE_SIZE - i, "%s\n", addr_str); + } + spin_unlock_irqrestore(&card->ip_lock, flags); + i += snprintf(buf + i, PAGE_SIZE - i, "\n"); + + return i; +} + +static ssize_t qeth_l3_dev_rxip_add4_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct qeth_card *card = dev_get_drvdata(dev); + + if (!card) + return -EINVAL; + + return qeth_l3_dev_rxip_add_show(buf, card, QETH_PROT_IPV4); +} + +static int qeth_l3_parse_rxipe(const char *buf, enum qeth_prot_versions proto, + u8 *addr) +{ + if (qeth_l3_string_to_ipaddr(buf, proto, addr)) { + PRINT_WARN("Invalid IP address format!\n"); + return -EINVAL; + } + return 0; +} + +static ssize_t qeth_l3_dev_rxip_add_store(const char *buf, size_t count, + struct qeth_card *card, enum qeth_prot_versions proto) +{ + u8 addr[16] = {0, }; + int rc; + + rc = qeth_l3_parse_rxipe(buf, proto, addr); + if (rc) + return rc; + + rc = qeth_l3_add_rxip(card, proto, addr); + if (rc) + return rc; + + return count; +} + +static ssize_t qeth_l3_dev_rxip_add4_store(struct device *dev, + struct device_attribute *attr, const char *buf, size_t count) +{ + struct qeth_card *card = dev_get_drvdata(dev); + + if (!card) + return -EINVAL; + + return qeth_l3_dev_rxip_add_store(buf, count, card, QETH_PROT_IPV4); +} + +static QETH_DEVICE_ATTR(rxip_add4, add4, 0644, + qeth_l3_dev_rxip_add4_show, + qeth_l3_dev_rxip_add4_store); + +static ssize_t qeth_l3_dev_rxip_del_store(const char *buf, size_t count, + struct qeth_card *card, enum qeth_prot_versions proto) +{ + u8 addr[16]; + int rc; + + rc = qeth_l3_parse_rxipe(buf, proto, addr); + if (rc) + return rc; + + qeth_l3_del_rxip(card, proto, addr); + + return count; +} + +static ssize_t qeth_l3_dev_rxip_del4_store(struct device *dev, + struct device_attribute *attr, const char *buf, size_t count) +{ + struct qeth_card *card = dev_get_drvdata(dev); + + if (!card) + return -EINVAL; + + return qeth_l3_dev_rxip_del_store(buf, count, card, QETH_PROT_IPV4); +} + +static QETH_DEVICE_ATTR(rxip_del4, del4, 0200, NULL, + qeth_l3_dev_rxip_del4_store); + +static ssize_t qeth_l3_dev_rxip_add6_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct qeth_card *card = dev_get_drvdata(dev); + + if (!card) + return -EINVAL; + + return qeth_l3_dev_rxip_add_show(buf, card, QETH_PROT_IPV6); +} + +static ssize_t qeth_l3_dev_rxip_add6_store(struct device *dev, + struct device_attribute *attr, const char *buf, size_t count) +{ + struct qeth_card *card = dev_get_drvdata(dev); + + if (!card) + return -EINVAL; + + return qeth_l3_dev_rxip_add_store(buf, count, 
card, QETH_PROT_IPV6); +} + +static QETH_DEVICE_ATTR(rxip_add6, add6, 0644, + qeth_l3_dev_rxip_add6_show, + qeth_l3_dev_rxip_add6_store); + +static ssize_t qeth_l3_dev_rxip_del6_store(struct device *dev, + struct device_attribute *attr, const char *buf, size_t count) +{ + struct qeth_card *card = dev_get_drvdata(dev); + + if (!card) + return -EINVAL; + + return qeth_l3_dev_rxip_del_store(buf, count, card, QETH_PROT_IPV6); +} + +static QETH_DEVICE_ATTR(rxip_del6, del6, 0200, NULL, + qeth_l3_dev_rxip_del6_store); + +static struct attribute *qeth_rxip_device_attrs[] = { + &dev_attr_rxip_add4.attr, + &dev_attr_rxip_del4.attr, + &dev_attr_rxip_add6.attr, + &dev_attr_rxip_del6.attr, + NULL, +}; + +static struct attribute_group qeth_device_rxip_group = { + .name = "rxip", + .attrs = qeth_rxip_device_attrs, +}; + +int qeth_l3_create_device_attributes(struct device *dev) +{ + int ret; + + ret = sysfs_create_group(&dev->kobj, &qeth_l3_device_attr_group); + if (ret) + return ret; + + ret = sysfs_create_group(&dev->kobj, &qeth_device_ipato_group); + if (ret) { + sysfs_remove_group(&dev->kobj, &qeth_l3_device_attr_group); + return ret; + } + + ret = sysfs_create_group(&dev->kobj, &qeth_device_vipa_group); + if (ret) { + sysfs_remove_group(&dev->kobj, &qeth_l3_device_attr_group); + sysfs_remove_group(&dev->kobj, &qeth_device_ipato_group); + return ret; + } + + ret = sysfs_create_group(&dev->kobj, &qeth_device_rxip_group); + if (ret) { + sysfs_remove_group(&dev->kobj, &qeth_l3_device_attr_group); + sysfs_remove_group(&dev->kobj, &qeth_device_ipato_group); + sysfs_remove_group(&dev->kobj, &qeth_device_vipa_group); + return ret; + } + return 0; +} + +void qeth_l3_remove_device_attributes(struct device *dev) +{ + sysfs_remove_group(&dev->kobj, &qeth_l3_device_attr_group); + sysfs_remove_group(&dev->kobj, &qeth_device_ipato_group); + sysfs_remove_group(&dev->kobj, &qeth_device_vipa_group); + sysfs_remove_group(&dev->kobj, &qeth_device_rxip_group); +} -- cgit v1.2.3-59-g8ed1b From 19a3da6c6e1e74ecac129a079139aaebb63fe6c8 Mon Sep 17 00:00:00 2001 From: Frank Blaschka Date: Fri, 15 Feb 2008 09:19:43 +0100 Subject: qeth: remove old qeth files Remove all obsolete qeth files. 
Signed-off-by: Frank Blaschka
Signed-off-by: Jeff Garzik
---
 drivers/s390/net/qeth.h      | 1253 ------
 drivers/s390/net/qeth_eddp.c |  634 ---
 drivers/s390/net/qeth_eddp.h |   84 -
 drivers/s390/net/qeth_fs.h   |  168 -
 drivers/s390/net/qeth_main.c | 8956 ------------------------------------------
 drivers/s390/net/qeth_mpc.c  |  269 --
 drivers/s390/net/qeth_mpc.h  |  583 ---
 drivers/s390/net/qeth_proc.c |  316 --
 drivers/s390/net/qeth_sys.c  | 1858 ---------
 drivers/s390/net/qeth_tso.h  |  148 -
 10 files changed, 14269 deletions(-)
 delete mode 100644 drivers/s390/net/qeth.h
 delete mode 100644 drivers/s390/net/qeth_eddp.c
 delete mode 100644 drivers/s390/net/qeth_eddp.h
 delete mode 100644 drivers/s390/net/qeth_fs.h
 delete mode 100644 drivers/s390/net/qeth_main.c
 delete mode 100644 drivers/s390/net/qeth_mpc.c
 delete mode 100644 drivers/s390/net/qeth_mpc.h
 delete mode 100644 drivers/s390/net/qeth_proc.c
 delete mode 100644 drivers/s390/net/qeth_sys.c
 delete mode 100644 drivers/s390/net/qeth_tso.h
(limited to 'drivers/s390')

diff --git a/drivers/s390/net/qeth.h b/drivers/s390/net/qeth.h
deleted file mode 100644
index 8c6b72d05b1d..000000000000
--- a/drivers/s390/net/qeth.h
+++ /dev/null
@@ -1,1253 +0,0 @@
-#ifndef __QETH_H__
-#define __QETH_H__
-
-#include
-#include
-
-#include
-#include
-#include
-#include
-#include
-
-#include
-#include
-#include
-#include
-
-
-#include
-
-#include
-#include
-#include
-#include
-
-#include "qeth_mpc.h"
-
-#ifdef CONFIG_QETH_IPV6
-#define QETH_VERSION_IPV6 ":IPv6"
-#else
-#define QETH_VERSION_IPV6 ""
-#endif
-#ifdef CONFIG_QETH_VLAN
-#define QETH_VERSION_VLAN ":VLAN"
-#else
-#define QETH_VERSION_VLAN ""
-#endif
-
-/**
- * Debug Facility stuff
- */
-#define QETH_DBF_SETUP_NAME "qeth_setup"
-#define QETH_DBF_SETUP_LEN 8
-#define QETH_DBF_SETUP_PAGES 8
-#define QETH_DBF_SETUP_NR_AREAS 1
-#define QETH_DBF_SETUP_LEVEL 5
-
-#define QETH_DBF_MISC_NAME "qeth_misc"
-#define QETH_DBF_MISC_LEN 128
-#define QETH_DBF_MISC_PAGES 2
-#define QETH_DBF_MISC_NR_AREAS 1
-#define QETH_DBF_MISC_LEVEL 2
-
-#define QETH_DBF_DATA_NAME "qeth_data"
-#define QETH_DBF_DATA_LEN 96
-#define QETH_DBF_DATA_PAGES 8
-#define QETH_DBF_DATA_NR_AREAS 1
-#define QETH_DBF_DATA_LEVEL 2
-
-#define QETH_DBF_CONTROL_NAME "qeth_control"
-#define QETH_DBF_CONTROL_LEN 256
-#define QETH_DBF_CONTROL_PAGES 8
-#define QETH_DBF_CONTROL_NR_AREAS 2
-#define QETH_DBF_CONTROL_LEVEL 5
-
-#define QETH_DBF_TRACE_NAME "qeth_trace"
-#define QETH_DBF_TRACE_LEN 8
-#define QETH_DBF_TRACE_PAGES 4
-#define QETH_DBF_TRACE_NR_AREAS 2
-#define QETH_DBF_TRACE_LEVEL 3
-extern debug_info_t *qeth_dbf_trace;
-
-#define QETH_DBF_SENSE_NAME "qeth_sense"
-#define QETH_DBF_SENSE_LEN 64
-#define QETH_DBF_SENSE_PAGES 2
-#define QETH_DBF_SENSE_NR_AREAS 1
-#define QETH_DBF_SENSE_LEVEL 2
-
-#define QETH_DBF_QERR_NAME "qeth_qerr"
-#define QETH_DBF_QERR_LEN 8
-#define QETH_DBF_QERR_PAGES 2
-#define QETH_DBF_QERR_NR_AREAS 2
-#define QETH_DBF_QERR_LEVEL 2
-
-#define QETH_DBF_TEXT(name,level,text) \
-	do { \
-		debug_text_event(qeth_dbf_##name,level,text); \
-	} while (0)
-
-#define QETH_DBF_HEX(name,level,addr,len) \
-	do { \
-		debug_event(qeth_dbf_##name,level,(void*)(addr),len); \
-	} while (0)
-
-DECLARE_PER_CPU(char[256], qeth_dbf_txt_buf);
-
-#define QETH_DBF_TEXT_(name,level,text...) \
-	do { \
-		char* dbf_txt_buf = get_cpu_var(qeth_dbf_txt_buf); \
-		sprintf(dbf_txt_buf, text); \
-		debug_text_event(qeth_dbf_##name,level,dbf_txt_buf); \
-		put_cpu_var(qeth_dbf_txt_buf); \
-	} while (0)
-
-#define QETH_DBF_SPRINTF(name,level,text...)
\ - do { \ - debug_sprintf_event(qeth_dbf_trace, level, ##text ); \ - debug_sprintf_event(qeth_dbf_trace, level, text ); \ - } while (0) - -/** - * some more debug stuff - */ -#define PRINTK_HEADER "qeth: " - -#define HEXDUMP16(importance,header,ptr) \ -PRINT_##importance(header "%02x %02x %02x %02x %02x %02x %02x %02x " \ - "%02x %02x %02x %02x %02x %02x %02x %02x\n", \ - *(((char*)ptr)),*(((char*)ptr)+1),*(((char*)ptr)+2), \ - *(((char*)ptr)+3),*(((char*)ptr)+4),*(((char*)ptr)+5), \ - *(((char*)ptr)+6),*(((char*)ptr)+7),*(((char*)ptr)+8), \ - *(((char*)ptr)+9),*(((char*)ptr)+10),*(((char*)ptr)+11), \ - *(((char*)ptr)+12),*(((char*)ptr)+13), \ - *(((char*)ptr)+14),*(((char*)ptr)+15)); \ -PRINT_##importance(header "%02x %02x %02x %02x %02x %02x %02x %02x " \ - "%02x %02x %02x %02x %02x %02x %02x %02x\n", \ - *(((char*)ptr)+16),*(((char*)ptr)+17), \ - *(((char*)ptr)+18),*(((char*)ptr)+19), \ - *(((char*)ptr)+20),*(((char*)ptr)+21), \ - *(((char*)ptr)+22),*(((char*)ptr)+23), \ - *(((char*)ptr)+24),*(((char*)ptr)+25), \ - *(((char*)ptr)+26),*(((char*)ptr)+27), \ - *(((char*)ptr)+28),*(((char*)ptr)+29), \ - *(((char*)ptr)+30),*(((char*)ptr)+31)); - -static inline void -qeth_hex_dump(unsigned char *buf, size_t len) -{ - size_t i; - - for (i = 0; i < len; i++) { - if (i && !(i % 16)) - printk("\n"); - printk("%02x ", *(buf + i)); - } - printk("\n"); -} - -#define SENSE_COMMAND_REJECT_BYTE 0 -#define SENSE_COMMAND_REJECT_FLAG 0x80 -#define SENSE_RESETTING_EVENT_BYTE 1 -#define SENSE_RESETTING_EVENT_FLAG 0x80 - -/* - * Common IO related definitions - */ -extern struct device *qeth_root_dev; -extern struct ccw_driver qeth_ccw_driver; -extern struct ccwgroup_driver qeth_ccwgroup_driver; - -#define CARD_RDEV(card) card->read.ccwdev -#define CARD_WDEV(card) card->write.ccwdev -#define CARD_DDEV(card) card->data.ccwdev -#define CARD_BUS_ID(card) card->gdev->dev.bus_id -#define CARD_RDEV_ID(card) card->read.ccwdev->dev.bus_id -#define CARD_WDEV_ID(card) card->write.ccwdev->dev.bus_id -#define CARD_DDEV_ID(card) card->data.ccwdev->dev.bus_id -#define CHANNEL_ID(channel) channel->ccwdev->dev.bus_id - -#define CARD_FROM_CDEV(cdev) (struct qeth_card *) \ - ((struct ccwgroup_device *)cdev->dev.driver_data)\ - ->dev.driver_data; - -/** - * card stuff - */ -struct qeth_perf_stats { - unsigned int bufs_rec; - unsigned int bufs_sent; - - unsigned int skbs_sent_pack; - unsigned int bufs_sent_pack; - - unsigned int sc_dp_p; - unsigned int sc_p_dp; - /* qdio_input_handler: number of times called, time spent in */ - __u64 inbound_start_time; - unsigned int inbound_cnt; - unsigned int inbound_time; - /* qeth_send_packet: number of times called, time spent in */ - __u64 outbound_start_time; - unsigned int outbound_cnt; - unsigned int outbound_time; - /* qdio_output_handler: number of times called, time spent in */ - __u64 outbound_handler_start_time; - unsigned int outbound_handler_cnt; - unsigned int outbound_handler_time; - /* number of calls to and time spent in do_QDIO for inbound queue */ - __u64 inbound_do_qdio_start_time; - unsigned int inbound_do_qdio_cnt; - unsigned int inbound_do_qdio_time; - /* number of calls to and time spent in do_QDIO for outbound queues */ - __u64 outbound_do_qdio_start_time; - unsigned int outbound_do_qdio_cnt; - unsigned int outbound_do_qdio_time; - /* eddp data */ - unsigned int large_send_bytes; - unsigned int large_send_cnt; - unsigned int sg_skbs_sent; - unsigned int sg_frags_sent; - /* initial values when measuring starts */ - unsigned long initial_rx_packets; - unsigned long 
initial_tx_packets; - /* inbound scatter gather data */ - unsigned int sg_skbs_rx; - unsigned int sg_frags_rx; - unsigned int sg_alloc_page_rx; -}; - -/* Routing stuff */ -struct qeth_routing_info { - enum qeth_routing_types type; -}; - -/* IPA stuff */ -struct qeth_ipa_info { - __u32 supported_funcs; - __u32 enabled_funcs; -}; - -static inline int -qeth_is_ipa_supported(struct qeth_ipa_info *ipa, enum qeth_ipa_funcs func) -{ - return (ipa->supported_funcs & func); -} - -static inline int -qeth_is_ipa_enabled(struct qeth_ipa_info *ipa, enum qeth_ipa_funcs func) -{ - return (ipa->supported_funcs & ipa->enabled_funcs & func); -} - -#define qeth_adp_supported(c,f) \ - qeth_is_ipa_supported(&c->options.adp, f) -#define qeth_adp_enabled(c,f) \ - qeth_is_ipa_enabled(&c->options.adp, f) -#define qeth_is_supported(c,f) \ - qeth_is_ipa_supported(&c->options.ipa4, f) -#define qeth_is_enabled(c,f) \ - qeth_is_ipa_enabled(&c->options.ipa4, f) -#ifdef CONFIG_QETH_IPV6 -#define qeth_is_supported6(c,f) \ - qeth_is_ipa_supported(&c->options.ipa6, f) -#define qeth_is_enabled6(c,f) \ - qeth_is_ipa_enabled(&c->options.ipa6, f) -#else /* CONFIG_QETH_IPV6 */ -#define qeth_is_supported6(c,f) 0 -#define qeth_is_enabled6(c,f) 0 -#endif /* CONFIG_QETH_IPV6 */ -#define qeth_is_ipafunc_supported(c,prot,f) \ - (prot==QETH_PROT_IPV6)? qeth_is_supported6(c,f):qeth_is_supported(c,f) -#define qeth_is_ipafunc_enabled(c,prot,f) \ - (prot==QETH_PROT_IPV6)? qeth_is_enabled6(c,f):qeth_is_enabled(c,f) - - -#define QETH_IDX_FUNC_LEVEL_OSAE_ENA_IPAT 0x0101 -#define QETH_IDX_FUNC_LEVEL_OSAE_DIS_IPAT 0x0101 -#define QETH_IDX_FUNC_LEVEL_IQD_ENA_IPAT 0x4108 -#define QETH_IDX_FUNC_LEVEL_IQD_DIS_IPAT 0x5108 - -#define QETH_MODELLIST_ARRAY \ - {{0x1731,0x01,0x1732,0x01,QETH_CARD_TYPE_OSAE,1, \ - QETH_IDX_FUNC_LEVEL_OSAE_ENA_IPAT, \ - QETH_IDX_FUNC_LEVEL_OSAE_DIS_IPAT, \ - QETH_MAX_QUEUES,0}, \ - {0x1731,0x05,0x1732,0x05,QETH_CARD_TYPE_IQD,0, \ - QETH_IDX_FUNC_LEVEL_IQD_ENA_IPAT, \ - QETH_IDX_FUNC_LEVEL_IQD_DIS_IPAT, \ - QETH_MAX_QUEUES,0x103}, \ - {0x1731,0x06,0x1732,0x06,QETH_CARD_TYPE_OSN,0, \ - QETH_IDX_FUNC_LEVEL_OSAE_ENA_IPAT, \ - QETH_IDX_FUNC_LEVEL_OSAE_DIS_IPAT, \ - QETH_MAX_QUEUES,0}, \ - {0,0,0,0,0,0,0,0,0}} - -#define QETH_REAL_CARD 1 -#define QETH_VLAN_CARD 2 -#define QETH_BUFSIZE 4096 - -/** - * some more defs - */ -#define IF_NAME_LEN 16 -#define QETH_TX_TIMEOUT 100 * HZ -#define QETH_RCD_TIMEOUT 60 * HZ -#define QETH_HEADER_SIZE 32 -#define MAX_PORTNO 15 -#define QETH_FAKE_LL_LEN_ETH ETH_HLEN -#define QETH_FAKE_LL_LEN_TR (sizeof(struct trh_hdr)-TR_MAXRIFLEN+sizeof(struct trllc)) -#define QETH_FAKE_LL_V6_ADDR_POS 24 - -/*IPv6 address autoconfiguration stuff*/ -#define UNIQUE_ID_IF_CREATE_ADDR_FAILED 0xfffe -#define UNIQUE_ID_NOT_BY_CARD 0x10000 - -/*****************************************************************************/ -/* QDIO queue and buffer handling */ -/*****************************************************************************/ -#define QETH_MAX_QUEUES 4 -#define QETH_IN_BUF_SIZE_DEFAULT 65536 -#define QETH_IN_BUF_COUNT_DEFAULT 16 -#define QETH_IN_BUF_COUNT_MIN 8 -#define QETH_IN_BUF_COUNT_MAX 128 -#define QETH_MAX_BUFFER_ELEMENTS(card) ((card)->qdio.in_buf_size >> 12) -#define QETH_IN_BUF_REQUEUE_THRESHOLD(card) \ - ((card)->qdio.in_buf_pool.buf_count / 2) - -/* buffers we have to be behind before we get a PCI */ -#define QETH_PCI_THRESHOLD_A(card) ((card)->qdio.in_buf_pool.buf_count+1) -/*enqueued free buffers left before we get a PCI*/ -#define QETH_PCI_THRESHOLD_B(card) 0 -/*not used unless the 
microcode gets patched*/ -#define QETH_PCI_TIMER_VALUE(card) 3 - -#define QETH_MIN_INPUT_THRESHOLD 1 -#define QETH_MAX_INPUT_THRESHOLD 500 -#define QETH_MIN_OUTPUT_THRESHOLD 1 -#define QETH_MAX_OUTPUT_THRESHOLD 300 - -/* priority queing */ -#define QETH_PRIOQ_DEFAULT QETH_NO_PRIO_QUEUEING -#define QETH_DEFAULT_QUEUE 2 -#define QETH_NO_PRIO_QUEUEING 0 -#define QETH_PRIO_Q_ING_PREC 1 -#define QETH_PRIO_Q_ING_TOS 2 -#define IP_TOS_LOWDELAY 0x10 -#define IP_TOS_HIGHTHROUGHPUT 0x08 -#define IP_TOS_HIGHRELIABILITY 0x04 -#define IP_TOS_NOTIMPORTANT 0x02 - -/* Packing */ -#define QETH_LOW_WATERMARK_PACK 2 -#define QETH_HIGH_WATERMARK_PACK 5 -#define QETH_WATERMARK_PACK_FUZZ 1 - -#define QETH_IP_HEADER_SIZE 40 - -/* large receive scatter gather copy break */ -#define QETH_RX_SG_CB (PAGE_SIZE >> 1) - -struct qeth_hdr_layer3 { - __u8 id; - __u8 flags; - __u16 inbound_checksum; /*TSO:__u16 seqno */ - __u32 token; /*TSO: __u32 reserved */ - __u16 length; - __u8 vlan_prio; - __u8 ext_flags; - __u16 vlan_id; - __u16 frame_offset; - __u8 dest_addr[16]; -} __attribute__ ((packed)); - -struct qeth_hdr_layer2 { - __u8 id; - __u8 flags[3]; - __u8 port_no; - __u8 hdr_length; - __u16 pkt_length; - __u16 seq_no; - __u16 vlan_id; - __u32 reserved; - __u8 reserved2[16]; -} __attribute__ ((packed)); - -struct qeth_hdr_osn { - __u8 id; - __u8 reserved; - __u16 seq_no; - __u16 reserved2; - __u16 control_flags; - __u16 pdu_length; - __u8 reserved3[18]; - __u32 ccid; -} __attribute__ ((packed)); - -struct qeth_hdr { - union { - struct qeth_hdr_layer2 l2; - struct qeth_hdr_layer3 l3; - struct qeth_hdr_osn osn; - } hdr; -} __attribute__ ((packed)); - -/*TCP Segmentation Offload header*/ -struct qeth_hdr_ext_tso { - __u16 hdr_tot_len; - __u8 imb_hdr_no; - __u8 reserved; - __u8 hdr_type; - __u8 hdr_version; - __u16 hdr_len; - __u32 payload_len; - __u16 mss; - __u16 dg_hdr_len; - __u8 padding[16]; -} __attribute__ ((packed)); - -struct qeth_hdr_tso { - struct qeth_hdr hdr; /*hdr->hdr.l3.xxx*/ - struct qeth_hdr_ext_tso ext; -} __attribute__ ((packed)); - - -/* flags for qeth_hdr.flags */ -#define QETH_HDR_PASSTHRU 0x10 -#define QETH_HDR_IPV6 0x80 -#define QETH_HDR_CAST_MASK 0x07 -enum qeth_cast_flags { - QETH_CAST_UNICAST = 0x06, - QETH_CAST_MULTICAST = 0x04, - QETH_CAST_BROADCAST = 0x05, - QETH_CAST_ANYCAST = 0x07, - QETH_CAST_NOCAST = 0x00, -}; - -enum qeth_layer2_frame_flags { - QETH_LAYER2_FLAG_MULTICAST = 0x01, - QETH_LAYER2_FLAG_BROADCAST = 0x02, - QETH_LAYER2_FLAG_UNICAST = 0x04, - QETH_LAYER2_FLAG_VLAN = 0x10, -}; - -enum qeth_header_ids { - QETH_HEADER_TYPE_LAYER3 = 0x01, - QETH_HEADER_TYPE_LAYER2 = 0x02, - QETH_HEADER_TYPE_TSO = 0x03, - QETH_HEADER_TYPE_OSN = 0x04, -}; -/* flags for qeth_hdr.ext_flags */ -#define QETH_HDR_EXT_VLAN_FRAME 0x01 -#define QETH_HDR_EXT_TOKEN_ID 0x02 -#define QETH_HDR_EXT_INCLUDE_VLAN_TAG 0x04 -#define QETH_HDR_EXT_SRC_MAC_ADDR 0x08 -#define QETH_HDR_EXT_CSUM_HDR_REQ 0x10 -#define QETH_HDR_EXT_CSUM_TRANSP_REQ 0x20 -#define QETH_HDR_EXT_UDP_TSO 0x40 /*bit off for TCP*/ - -static inline int -qeth_is_last_sbale(struct qdio_buffer_element *sbale) -{ - return (sbale->flags & SBAL_FLAGS_LAST_ENTRY); -} - -enum qeth_qdio_buffer_states { - /* - * inbound: read out by driver; owned by hardware in order to be filled - * outbound: owned by driver in order to be filled - */ - QETH_QDIO_BUF_EMPTY, - /* - * inbound: filled by hardware; owned by driver in order to be read out - * outbound: filled by driver; owned by hardware in order to be sent - */ - QETH_QDIO_BUF_PRIMED, -}; - -enum 
qeth_qdio_info_states { - QETH_QDIO_UNINITIALIZED, - QETH_QDIO_ALLOCATED, - QETH_QDIO_ESTABLISHED, - QETH_QDIO_CLEANING -}; - -struct qeth_buffer_pool_entry { - struct list_head list; - struct list_head init_list; - void *elements[QDIO_MAX_ELEMENTS_PER_BUFFER]; -}; - -struct qeth_qdio_buffer_pool { - struct list_head entry_list; - int buf_count; -}; - -struct qeth_qdio_buffer { - struct qdio_buffer *buffer; - volatile enum qeth_qdio_buffer_states state; - /* the buffer pool entry currently associated to this buffer */ - struct qeth_buffer_pool_entry *pool_entry; -}; - -struct qeth_qdio_q { - struct qdio_buffer qdio_bufs[QDIO_MAX_BUFFERS_PER_Q]; - struct qeth_qdio_buffer bufs[QDIO_MAX_BUFFERS_PER_Q]; - /* - * buf_to_init means "buffer must be initialized by driver and must - * be made available for hardware" -> state is set to EMPTY - */ - volatile int next_buf_to_init; -} __attribute__ ((aligned(256))); - -/* possible types of qeth large_send support */ -enum qeth_large_send_types { - QETH_LARGE_SEND_NO, - QETH_LARGE_SEND_EDDP, - QETH_LARGE_SEND_TSO, -}; - -struct qeth_qdio_out_buffer { - struct qdio_buffer *buffer; - atomic_t state; - volatile int next_element_to_fill; - struct sk_buff_head skb_list; - struct list_head ctx_list; -}; - -struct qeth_card; - -enum qeth_out_q_states { - QETH_OUT_Q_UNLOCKED, - QETH_OUT_Q_LOCKED, - QETH_OUT_Q_LOCKED_FLUSH, -}; - -struct qeth_qdio_out_q { - struct qdio_buffer qdio_bufs[QDIO_MAX_BUFFERS_PER_Q]; - struct qeth_qdio_out_buffer bufs[QDIO_MAX_BUFFERS_PER_Q]; - int queue_no; - struct qeth_card *card; - atomic_t state; - volatile int do_pack; - /* - * index of buffer to be filled by driver; state EMPTY or PACKING - */ - volatile int next_buf_to_fill; - /* - * number of buffers that are currently filled (PRIMED) - * -> these buffers are hardware-owned - */ - atomic_t used_buffers; - /* indicates whether PCI flag must be set (or if one is outstanding) */ - atomic_t set_pci_flags_count; -} __attribute__ ((aligned(256))); - -struct qeth_qdio_info { - atomic_t state; - /* input */ - struct qeth_qdio_q *in_q; - struct qeth_qdio_buffer_pool in_buf_pool; - struct qeth_qdio_buffer_pool init_pool; - int in_buf_size; - - /* output */ - int no_out_queues; - struct qeth_qdio_out_q **out_qs; - - /* priority queueing */ - int do_prio_queueing; - int default_out_queue; -}; - -enum qeth_send_errors { - QETH_SEND_ERROR_NONE, - QETH_SEND_ERROR_LINK_FAILURE, - QETH_SEND_ERROR_RETRY, - QETH_SEND_ERROR_KICK_IT, -}; - -#define QETH_ETH_MAC_V4 0x0100 /* like v4 */ -#define QETH_ETH_MAC_V6 0x3333 /* like v6 */ -/* tr mc mac is longer, but that will be enough to detect mc frames */ -#define QETH_TR_MAC_NC 0xc000 /* non-canonical */ -#define QETH_TR_MAC_C 0x0300 /* canonical */ - -#define DEFAULT_ADD_HHLEN 0 -#define MAX_ADD_HHLEN 1024 - -/** - * buffer stuff for read channel - */ -#define QETH_CMD_BUFFER_NO 8 - -/** - * channel state machine - */ -enum qeth_channel_states { - CH_STATE_UP, - CH_STATE_DOWN, - CH_STATE_ACTIVATING, - CH_STATE_HALTED, - CH_STATE_STOPPED, - CH_STATE_RCD, - CH_STATE_RCD_DONE, -}; -/** - * card state machine - */ -enum qeth_card_states { - CARD_STATE_DOWN, - CARD_STATE_HARDSETUP, - CARD_STATE_SOFTSETUP, - CARD_STATE_UP, - CARD_STATE_RECOVER, -}; - -/** - * Protocol versions - */ -enum qeth_prot_versions { - QETH_PROT_IPV4 = 0x0004, - QETH_PROT_IPV6 = 0x0006, -}; - -enum qeth_ip_types { - QETH_IP_TYPE_NORMAL, - QETH_IP_TYPE_VIPA, - QETH_IP_TYPE_RXIP, - QETH_IP_TYPE_DEL_ALL_MC, -}; - -enum qeth_cmd_buffer_state { - BUF_STATE_FREE, - BUF_STATE_LOCKED, - 
BUF_STATE_PROCESSED, -}; -/** - * IP address and multicast list - */ -struct qeth_ipaddr { - struct list_head entry; - enum qeth_ip_types type; - enum qeth_ipa_setdelip_flags set_flags; - enum qeth_ipa_setdelip_flags del_flags; - int is_multicast; - volatile int users; - enum qeth_prot_versions proto; - unsigned char mac[OSA_ADDR_LEN]; - union { - struct { - unsigned int addr; - unsigned int mask; - } a4; - struct { - struct in6_addr addr; - unsigned int pfxlen; - } a6; - } u; -}; - -struct qeth_ipato_entry { - struct list_head entry; - enum qeth_prot_versions proto; - char addr[16]; - int mask_bits; -}; - -struct qeth_ipato { - int enabled; - int invert4; - int invert6; - struct list_head entries; -}; - -struct qeth_channel; - -struct qeth_cmd_buffer { - enum qeth_cmd_buffer_state state; - struct qeth_channel *channel; - unsigned char *data; - int rc; - void (*callback) (struct qeth_channel *, struct qeth_cmd_buffer *); -}; - - -/** - * definition of a qeth channel, used for read and write - */ -struct qeth_channel { - enum qeth_channel_states state; - struct ccw1 ccw; - spinlock_t iob_lock; - wait_queue_head_t wait_q; - struct tasklet_struct irq_tasklet; - struct ccw_device *ccwdev; -/*command buffer for control data*/ - struct qeth_cmd_buffer iob[QETH_CMD_BUFFER_NO]; - atomic_t irq_pending; - volatile int io_buf_no; - volatile int buf_no; -}; - -/** - * OSA card related definitions - */ -struct qeth_token { - __u32 issuer_rm_w; - __u32 issuer_rm_r; - __u32 cm_filter_w; - __u32 cm_filter_r; - __u32 cm_connection_w; - __u32 cm_connection_r; - __u32 ulp_filter_w; - __u32 ulp_filter_r; - __u32 ulp_connection_w; - __u32 ulp_connection_r; -}; - -struct qeth_seqno { - __u32 trans_hdr; - __u32 pdu_hdr; - __u32 pdu_hdr_ack; - __u16 ipa; - __u32 pkt_seqno; -}; - -struct qeth_reply { - struct list_head list; - wait_queue_head_t wait_q; - int (*callback)(struct qeth_card *,struct qeth_reply *,unsigned long); - u32 seqno; - unsigned long offset; - atomic_t received; - int rc; - void *param; - struct qeth_card *card; - atomic_t refcnt; -}; - - -struct qeth_card_blkt { - int time_total; - int inter_packet; - int inter_packet_jumbo; -}; - -#define QETH_BROADCAST_WITH_ECHO 0x01 -#define QETH_BROADCAST_WITHOUT_ECHO 0x02 -#define QETH_LAYER2_MAC_READ 0x01 -#define QETH_LAYER2_MAC_REGISTERED 0x02 -struct qeth_card_info { - unsigned short unit_addr2; - unsigned short cula; - unsigned short chpid; - __u16 func_level; - char mcl_level[QETH_MCL_LENGTH + 1]; - int guestlan; - int mac_bits; - int portname_required; - int portno; - char portname[9]; - enum qeth_card_types type; - enum qeth_link_types link_type; - int is_multicast_different; - int initial_mtu; - int max_mtu; - int broadcast_capable; - int unique_id; - struct qeth_card_blkt blkt; - __u32 csum_mask; - enum qeth_ipa_promisc_modes promisc_mode; -}; - -struct qeth_card_options { - struct qeth_routing_info route4; - struct qeth_ipa_info ipa4; - struct qeth_ipa_info adp; /*Adapter parameters*/ -#ifdef CONFIG_QETH_IPV6 - struct qeth_routing_info route6; - struct qeth_ipa_info ipa6; -#endif /* QETH_IPV6 */ - enum qeth_checksum_types checksum_type; - int broadcast_mode; - int macaddr_mode; - int fake_broadcast; - int add_hhlen; - int fake_ll; - int layer2; - enum qeth_large_send_types large_send; - int performance_stats; - int rx_sg_cb; -}; - -/* - * thread bits for qeth_card thread masks - */ -enum qeth_threads { - QETH_SET_IP_THREAD = 1, - QETH_RECOVER_THREAD = 2, - QETH_SET_PROMISC_MODE_THREAD = 4, -}; - -struct qeth_osn_info { - int (*assist_cb)(struct 
net_device *dev, void *data); - int (*data_cb)(struct sk_buff *skb); -}; - -struct qeth_card { - struct list_head list; - enum qeth_card_states state; - int lan_online; - spinlock_t lock; -/*hardware and sysfs stuff*/ - struct ccwgroup_device *gdev; - struct qeth_channel read; - struct qeth_channel write; - struct qeth_channel data; - - struct net_device *dev; - struct net_device_stats stats; - - struct qeth_card_info info; - struct qeth_token token; - struct qeth_seqno seqno; - struct qeth_card_options options; - - wait_queue_head_t wait_q; -#ifdef CONFIG_QETH_VLAN - spinlock_t vlanlock; - struct vlan_group *vlangrp; -#endif - struct work_struct kernel_thread_starter; - spinlock_t thread_mask_lock; - volatile unsigned long thread_start_mask; - volatile unsigned long thread_allowed_mask; - volatile unsigned long thread_running_mask; - spinlock_t ip_lock; - struct list_head ip_list; - struct list_head *ip_tbd_list; - struct qeth_ipato ipato; - struct list_head cmd_waiter_list; - /* QDIO buffer handling */ - struct qeth_qdio_info qdio; - struct qeth_perf_stats perf_stats; - int use_hard_stop; - const struct header_ops *orig_header_ops; - struct qeth_osn_info osn_info; - atomic_t force_alloc_skb; -}; - -struct qeth_card_list_struct { - struct list_head list; - rwlock_t rwlock; -}; - -extern struct qeth_card_list_struct qeth_card_list; - -/*notifier list */ -struct qeth_notify_list_struct { - struct list_head list; - struct task_struct *task; - int signum; -}; -extern spinlock_t qeth_notify_lock; -extern struct list_head qeth_notify_list; - -/*some helper functions*/ - -#define QETH_CARD_IFNAME(card) (((card)->dev)? (card)->dev->name : "") - -static inline __u8 -qeth_get_ipa_adp_type(enum qeth_link_types link_type) -{ - switch (link_type) { - case QETH_LINK_TYPE_HSTR: - return 2; - default: - return 1; - } -} - -static inline struct sk_buff * -qeth_realloc_headroom(struct qeth_card *card, struct sk_buff *skb, int size) -{ - struct sk_buff *new_skb = skb; - - if (skb_headroom(skb) >= size) - return skb; - new_skb = skb_realloc_headroom(skb, size); - if (!new_skb) - PRINT_ERR("Could not realloc headroom for qeth_hdr " - "on interface %s", QETH_CARD_IFNAME(card)); - return new_skb; -} - -static inline struct sk_buff * -qeth_pskb_unshare(struct sk_buff *skb, gfp_t pri) -{ - struct sk_buff *nskb; - if (!skb_cloned(skb)) - return skb; - nskb = skb_copy(skb, pri); - return nskb; -} - -static inline void * -qeth_push_skb(struct qeth_card *card, struct sk_buff *skb, int size) -{ - void *hdr; - - hdr = (void *) skb_push(skb, size); - /* - * sanity check, the Linux memory allocation scheme should - * never present us cases like this one (the qdio header size plus - * the first 40 bytes of the paket cross a 4k boundary) - */ - if ((((unsigned long) hdr) & (~(PAGE_SIZE - 1))) != - (((unsigned long) hdr + size + - QETH_IP_HEADER_SIZE) & (~(PAGE_SIZE - 1)))) { - PRINT_ERR("Misaligned packet on interface %s. 
Discarded.", - QETH_CARD_IFNAME(card)); - return NULL; - } - return hdr; -} - - -static inline int -qeth_get_hlen(__u8 link_type) -{ -#ifdef CONFIG_QETH_IPV6 - switch (link_type) { - case QETH_LINK_TYPE_HSTR: - case QETH_LINK_TYPE_LANE_TR: - return sizeof(struct qeth_hdr_tso) + TR_HLEN; - default: -#ifdef CONFIG_QETH_VLAN - return sizeof(struct qeth_hdr_tso) + VLAN_ETH_HLEN; -#else - return sizeof(struct qeth_hdr_tso) + ETH_HLEN; -#endif - } -#else /* CONFIG_QETH_IPV6 */ -#ifdef CONFIG_QETH_VLAN - return sizeof(struct qeth_hdr_tso) + VLAN_HLEN; -#else - return sizeof(struct qeth_hdr_tso); -#endif -#endif /* CONFIG_QETH_IPV6 */ -} - -static inline unsigned short -qeth_get_netdev_flags(struct qeth_card *card) -{ - if (card->options.layer2 && - (card->info.type == QETH_CARD_TYPE_OSAE)) - return 0; - switch (card->info.type) { - case QETH_CARD_TYPE_IQD: - case QETH_CARD_TYPE_OSN: - return IFF_NOARP; -#ifdef CONFIG_QETH_IPV6 - default: - return 0; -#else - default: - return IFF_NOARP; -#endif - } -} - -static inline int -qeth_get_initial_mtu_for_card(struct qeth_card * card) -{ - switch (card->info.type) { - case QETH_CARD_TYPE_UNKNOWN: - return 1500; - case QETH_CARD_TYPE_IQD: - return card->info.max_mtu; - case QETH_CARD_TYPE_OSAE: - switch (card->info.link_type) { - case QETH_LINK_TYPE_HSTR: - case QETH_LINK_TYPE_LANE_TR: - return 2000; - default: - return 1492; - } - default: - return 1500; - } -} - -static inline int -qeth_get_max_mtu_for_card(int cardtype) -{ - switch (cardtype) { - - case QETH_CARD_TYPE_UNKNOWN: - case QETH_CARD_TYPE_OSAE: - case QETH_CARD_TYPE_OSN: - return 61440; - case QETH_CARD_TYPE_IQD: - return 57344; - default: - return 1500; - } -} - -static inline int -qeth_get_mtu_out_of_mpc(int cardtype) -{ - switch (cardtype) { - case QETH_CARD_TYPE_IQD: - return 1; - default: - return 0; - } -} - -static inline int -qeth_get_mtu_outof_framesize(int framesize) -{ - switch (framesize) { - case 0x4000: - return 8192; - case 0x6000: - return 16384; - case 0xa000: - return 32768; - case 0xffff: - return 57344; - default: - return 0; - } -} - -static inline int -qeth_mtu_is_valid(struct qeth_card * card, int mtu) -{ - switch (card->info.type) { - case QETH_CARD_TYPE_OSAE: - return ((mtu >= 576) && (mtu <= 61440)); - case QETH_CARD_TYPE_IQD: - return ((mtu >= 576) && - (mtu <= card->info.max_mtu + 4096 - 32)); - case QETH_CARD_TYPE_OSN: - case QETH_CARD_TYPE_UNKNOWN: - default: - return 1; - } -} - -static inline int -qeth_get_arphdr_type(int cardtype, int linktype) -{ - switch (cardtype) { - case QETH_CARD_TYPE_OSAE: - case QETH_CARD_TYPE_OSN: - switch (linktype) { - case QETH_LINK_TYPE_LANE_TR: - case QETH_LINK_TYPE_HSTR: - return ARPHRD_IEEE802_TR; - default: - return ARPHRD_ETHER; - } - case QETH_CARD_TYPE_IQD: - default: - return ARPHRD_ETHER; - } -} - -static inline int -qeth_get_micros(void) -{ - return (int) (get_clock() >> 12); -} - -static inline int -qeth_get_qdio_q_format(struct qeth_card *card) -{ - switch (card->info.type) { - case QETH_CARD_TYPE_IQD: - return 2; - default: - return 0; - } -} - -static inline int -qeth_isxdigit(char * buf) -{ - while (*buf) { - if (!isxdigit(*buf++)) - return 0; - } - return 1; -} - -static inline void -qeth_ipaddr4_to_string(const __u8 *addr, char *buf) -{ - sprintf(buf, "%i.%i.%i.%i", addr[0], addr[1], addr[2], addr[3]); -} - -static inline int -qeth_string_to_ipaddr4(const char *buf, __u8 *addr) -{ - int count = 0, rc = 0; - int in[4]; - char c; - - rc = sscanf(buf, "%u.%u.%u.%u%c", - &in[0], &in[1], &in[2], &in[3], &c); - if (rc 
!= 4 && (rc != 5 || c != '\n')) - return -EINVAL; - for (count = 0; count < 4; count++) { - if (in[count] > 255) - return -EINVAL; - addr[count] = in[count]; - } - return 0; -} - -static inline void -qeth_ipaddr6_to_string(const __u8 *addr, char *buf) -{ - sprintf(buf, "%02x%02x:%02x%02x:%02x%02x:%02x%02x" - ":%02x%02x:%02x%02x:%02x%02x:%02x%02x", - addr[0], addr[1], addr[2], addr[3], - addr[4], addr[5], addr[6], addr[7], - addr[8], addr[9], addr[10], addr[11], - addr[12], addr[13], addr[14], addr[15]); -} - -static inline int -qeth_string_to_ipaddr6(const char *buf, __u8 *addr) -{ - const char *end, *end_tmp, *start; - __u16 *in; - char num[5]; - int num2, cnt, out, found, save_cnt; - unsigned short in_tmp[8] = {0, }; - - cnt = out = found = save_cnt = num2 = 0; - end = start = buf; - in = (__u16 *) addr; - memset(in, 0, 16); - while (*end) { - end = strchr(start,':'); - if (end == NULL) { - end = buf + strlen(buf); - if ((end_tmp = strchr(start, '\n')) != NULL) - end = end_tmp; - out = 1; - } - if ((end - start)) { - memset(num, 0, 5); - if ((end - start) > 4) - return -EINVAL; - memcpy(num, start, end - start); - if (!qeth_isxdigit(num)) - return -EINVAL; - sscanf(start, "%x", &num2); - if (found) - in_tmp[save_cnt++] = num2; - else - in[cnt++] = num2; - if (out) - break; - } else { - if (found) - return -EINVAL; - found = 1; - } - start = ++end; - } - if (cnt + save_cnt > 8) - return -EINVAL; - cnt = 7; - while (save_cnt) - in[cnt--] = in_tmp[--save_cnt]; - return 0; -} - -static inline void -qeth_ipaddr_to_string(enum qeth_prot_versions proto, const __u8 *addr, - char *buf) -{ - if (proto == QETH_PROT_IPV4) - qeth_ipaddr4_to_string(addr, buf); - else if (proto == QETH_PROT_IPV6) - qeth_ipaddr6_to_string(addr, buf); -} - -static inline int -qeth_string_to_ipaddr(const char *buf, enum qeth_prot_versions proto, - __u8 *addr) -{ - if (proto == QETH_PROT_IPV4) - return qeth_string_to_ipaddr4(buf, addr); - else if (proto == QETH_PROT_IPV6) - return qeth_string_to_ipaddr6(buf, addr); - else - return -EINVAL; -} - -extern int -qeth_setrouting_v4(struct qeth_card *); -extern int -qeth_setrouting_v6(struct qeth_card *); - -extern int -qeth_add_ipato_entry(struct qeth_card *, struct qeth_ipato_entry *); - -extern void -qeth_del_ipato_entry(struct qeth_card *, enum qeth_prot_versions, u8 *, int); - -extern int -qeth_add_vipa(struct qeth_card *, enum qeth_prot_versions, const u8 *); - -extern void -qeth_del_vipa(struct qeth_card *, enum qeth_prot_versions, const u8 *); - -extern int -qeth_add_rxip(struct qeth_card *, enum qeth_prot_versions, const u8 *); - -extern void -qeth_del_rxip(struct qeth_card *, enum qeth_prot_versions, const u8 *); - -extern int -qeth_notifier_register(struct task_struct *, int ); - -extern int -qeth_notifier_unregister(struct task_struct * ); - -extern void -qeth_schedule_recovery(struct qeth_card *); - -extern int -qeth_realloc_buffer_pool(struct qeth_card *, int); - -extern int -qeth_set_large_send(struct qeth_card *, enum qeth_large_send_types); - -extern void -qeth_fill_header(struct qeth_card *, struct qeth_hdr *, - struct sk_buff *, int, int); -extern void -qeth_flush_buffers(struct qeth_qdio_out_q *, int, int, int); - -extern int -qeth_osn_assist(struct net_device *, void *, int); - -extern int -qeth_osn_register(unsigned char *read_dev_no, - struct net_device **, - int (*assist_cb)(struct net_device *, void *), - int (*data_cb)(struct sk_buff *)); - -extern void -qeth_osn_deregister(struct net_device *); - -#endif /* __QETH_H__ */ diff --git 
a/drivers/s390/net/qeth_eddp.c b/drivers/s390/net/qeth_eddp.c deleted file mode 100644 index e3c268cfbffe..000000000000 --- a/drivers/s390/net/qeth_eddp.c +++ /dev/null @@ -1,634 +0,0 @@ -/* - * linux/drivers/s390/net/qeth_eddp.c - * - * Enhanced Device Driver Packing (EDDP) support for the qeth driver. - * - * Copyright 2004 IBM Corporation - * - * Author(s): Thomas Spatzier - * - */ -#include -#include -#include -#include -#include -#include -#include -#include - -#include - -#include "qeth.h" -#include "qeth_mpc.h" -#include "qeth_eddp.h" - -int -qeth_eddp_check_buffers_for_context(struct qeth_qdio_out_q *queue, - struct qeth_eddp_context *ctx) -{ - int index = queue->next_buf_to_fill; - int elements_needed = ctx->num_elements; - int elements_in_buffer; - int skbs_in_buffer; - int buffers_needed = 0; - - QETH_DBF_TEXT(trace, 5, "eddpcbfc"); - while(elements_needed > 0) { - buffers_needed++; - if (atomic_read(&queue->bufs[index].state) != - QETH_QDIO_BUF_EMPTY) - return -EBUSY; - - elements_in_buffer = QETH_MAX_BUFFER_ELEMENTS(queue->card) - - queue->bufs[index].next_element_to_fill; - skbs_in_buffer = elements_in_buffer / ctx->elements_per_skb; - elements_needed -= skbs_in_buffer * ctx->elements_per_skb; - index = (index + 1) % QDIO_MAX_BUFFERS_PER_Q; - } - return buffers_needed; -} - -static void -qeth_eddp_free_context(struct qeth_eddp_context *ctx) -{ - int i; - - QETH_DBF_TEXT(trace, 5, "eddpfctx"); - for (i = 0; i < ctx->num_pages; ++i) - free_page((unsigned long)ctx->pages[i]); - kfree(ctx->pages); - kfree(ctx->elements); - kfree(ctx); -} - - -static inline void -qeth_eddp_get_context(struct qeth_eddp_context *ctx) -{ - atomic_inc(&ctx->refcnt); -} - -void -qeth_eddp_put_context(struct qeth_eddp_context *ctx) -{ - if (atomic_dec_return(&ctx->refcnt) == 0) - qeth_eddp_free_context(ctx); -} - -void -qeth_eddp_buf_release_contexts(struct qeth_qdio_out_buffer *buf) -{ - struct qeth_eddp_context_reference *ref; - - QETH_DBF_TEXT(trace, 6, "eddprctx"); - while (!list_empty(&buf->ctx_list)){ - ref = list_entry(buf->ctx_list.next, - struct qeth_eddp_context_reference, list); - qeth_eddp_put_context(ref->ctx); - list_del(&ref->list); - kfree(ref); - } -} - -static int -qeth_eddp_buf_ref_context(struct qeth_qdio_out_buffer *buf, - struct qeth_eddp_context *ctx) -{ - struct qeth_eddp_context_reference *ref; - - QETH_DBF_TEXT(trace, 6, "eddprfcx"); - ref = kmalloc(sizeof(struct qeth_eddp_context_reference), GFP_ATOMIC); - if (ref == NULL) - return -ENOMEM; - qeth_eddp_get_context(ctx); - ref->ctx = ctx; - list_add_tail(&ref->list, &buf->ctx_list); - return 0; -} - -int -qeth_eddp_fill_buffer(struct qeth_qdio_out_q *queue, - struct qeth_eddp_context *ctx, - int index) -{ - struct qeth_qdio_out_buffer *buf = NULL; - struct qdio_buffer *buffer; - int elements = ctx->num_elements; - int element = 0; - int flush_cnt = 0; - int must_refcnt = 1; - int i; - - QETH_DBF_TEXT(trace, 5, "eddpfibu"); - while (elements > 0) { - buf = &queue->bufs[index]; - if (atomic_read(&buf->state) != QETH_QDIO_BUF_EMPTY){ - /* normally this should not happen since we checked for - * available elements in qeth_check_elements_for_context - */ - if (element == 0) - return -EBUSY; - else { - PRINT_WARN("could only partially fill eddp " - "buffer!\n"); - goto out; - } - } - /* check if the whole next skb fits into current buffer */ - if ((QETH_MAX_BUFFER_ELEMENTS(queue->card) - - buf->next_element_to_fill) - < ctx->elements_per_skb){ - /* no -> go to next buffer */ - atomic_set(&buf->state, QETH_QDIO_BUF_PRIMED); - index = 
(index + 1) % QDIO_MAX_BUFFERS_PER_Q; - flush_cnt++; - /* new buffer, so we have to add ctx to buffer'ctx_list - * and increment ctx's refcnt */ - must_refcnt = 1; - continue; - } - if (must_refcnt){ - must_refcnt = 0; - if (qeth_eddp_buf_ref_context(buf, ctx)){ - PRINT_WARN("no memory to create eddp context " - "reference\n"); - goto out_check; - } - } - buffer = buf->buffer; - /* fill one skb into buffer */ - for (i = 0; i < ctx->elements_per_skb; ++i){ - if (ctx->elements[element].length != 0) { - buffer->element[buf->next_element_to_fill]. - addr = ctx->elements[element].addr; - buffer->element[buf->next_element_to_fill]. - length = ctx->elements[element].length; - buffer->element[buf->next_element_to_fill]. - flags = ctx->elements[element].flags; - buf->next_element_to_fill++; - } - element++; - elements--; - } - } -out_check: - if (!queue->do_pack) { - QETH_DBF_TEXT(trace, 6, "fillbfnp"); - /* set state to PRIMED -> will be flushed */ - if (buf->next_element_to_fill > 0){ - atomic_set(&buf->state, QETH_QDIO_BUF_PRIMED); - flush_cnt++; - } - } else { - if (queue->card->options.performance_stats) - queue->card->perf_stats.skbs_sent_pack++; - QETH_DBF_TEXT(trace, 6, "fillbfpa"); - if (buf->next_element_to_fill >= - QETH_MAX_BUFFER_ELEMENTS(queue->card)) { - /* - * packed buffer if full -> set state PRIMED - * -> will be flushed - */ - atomic_set(&buf->state, QETH_QDIO_BUF_PRIMED); - flush_cnt++; - } - } -out: - return flush_cnt; -} - -static void -qeth_eddp_create_segment_hdrs(struct qeth_eddp_context *ctx, - struct qeth_eddp_data *eddp, int data_len) -{ - u8 *page; - int page_remainder; - int page_offset; - int pkt_len; - struct qeth_eddp_element *element; - - QETH_DBF_TEXT(trace, 5, "eddpcrsh"); - page = ctx->pages[ctx->offset >> PAGE_SHIFT]; - page_offset = ctx->offset % PAGE_SIZE; - element = &ctx->elements[ctx->num_elements]; - pkt_len = eddp->nhl + eddp->thl + data_len; - /* FIXME: layer2 and VLAN !!! */ - if (eddp->qh.hdr.l2.id == QETH_HEADER_TYPE_LAYER2) - pkt_len += ETH_HLEN; - if (eddp->mac.h_proto == __constant_htons(ETH_P_8021Q)) - pkt_len += VLAN_HLEN; - /* does complete packet fit in current page ? */ - page_remainder = PAGE_SIZE - page_offset; - if (page_remainder < (sizeof(struct qeth_hdr) + pkt_len)){ - /* no -> go to start of next page */ - ctx->offset += page_remainder; - page = ctx->pages[ctx->offset >> PAGE_SHIFT]; - page_offset = 0; - } - memcpy(page + page_offset, &eddp->qh, sizeof(struct qeth_hdr)); - element->addr = page + page_offset; - element->length = sizeof(struct qeth_hdr); - ctx->offset += sizeof(struct qeth_hdr); - page_offset += sizeof(struct qeth_hdr); - /* add mac header (?) 
*/ - if (eddp->qh.hdr.l2.id == QETH_HEADER_TYPE_LAYER2){ - memcpy(page + page_offset, &eddp->mac, ETH_HLEN); - element->length += ETH_HLEN; - ctx->offset += ETH_HLEN; - page_offset += ETH_HLEN; - } - /* add VLAN tag */ - if (eddp->mac.h_proto == __constant_htons(ETH_P_8021Q)){ - memcpy(page + page_offset, &eddp->vlan, VLAN_HLEN); - element->length += VLAN_HLEN; - ctx->offset += VLAN_HLEN; - page_offset += VLAN_HLEN; - } - /* add network header */ - memcpy(page + page_offset, (u8 *)&eddp->nh, eddp->nhl); - element->length += eddp->nhl; - eddp->nh_in_ctx = page + page_offset; - ctx->offset += eddp->nhl; - page_offset += eddp->nhl; - /* add transport header */ - memcpy(page + page_offset, (u8 *)&eddp->th, eddp->thl); - element->length += eddp->thl; - eddp->th_in_ctx = page + page_offset; - ctx->offset += eddp->thl; -} - -static void -qeth_eddp_copy_data_tcp(char *dst, struct qeth_eddp_data *eddp, int len, - __wsum *hcsum) -{ - struct skb_frag_struct *frag; - int left_in_frag; - int copy_len; - u8 *src; - - QETH_DBF_TEXT(trace, 5, "eddpcdtc"); - if (skb_shinfo(eddp->skb)->nr_frags == 0) { - skb_copy_from_linear_data_offset(eddp->skb, eddp->skb_offset, - dst, len); - *hcsum = csum_partial(eddp->skb->data + eddp->skb_offset, len, - *hcsum); - eddp->skb_offset += len; - } else { - while (len > 0) { - if (eddp->frag < 0) { - /* we're in skb->data */ - left_in_frag = (eddp->skb->len - eddp->skb->data_len) - - eddp->skb_offset; - src = eddp->skb->data + eddp->skb_offset; - } else { - frag = &skb_shinfo(eddp->skb)-> - frags[eddp->frag]; - left_in_frag = frag->size - eddp->frag_offset; - src = (u8 *)( - (page_to_pfn(frag->page) << PAGE_SHIFT)+ - frag->page_offset + eddp->frag_offset); - } - if (left_in_frag <= 0) { - eddp->frag++; - eddp->frag_offset = 0; - continue; - } - copy_len = min(left_in_frag, len); - memcpy(dst, src, copy_len); - *hcsum = csum_partial(src, copy_len, *hcsum); - dst += copy_len; - eddp->frag_offset += copy_len; - eddp->skb_offset += copy_len; - len -= copy_len; - } - } -} - -static void -qeth_eddp_create_segment_data_tcp(struct qeth_eddp_context *ctx, - struct qeth_eddp_data *eddp, int data_len, - __wsum hcsum) -{ - u8 *page; - int page_remainder; - int page_offset; - struct qeth_eddp_element *element; - int first_lap = 1; - - QETH_DBF_TEXT(trace, 5, "eddpcsdt"); - page = ctx->pages[ctx->offset >> PAGE_SHIFT]; - page_offset = ctx->offset % PAGE_SIZE; - element = &ctx->elements[ctx->num_elements]; - while (data_len){ - page_remainder = PAGE_SIZE - page_offset; - if (page_remainder < data_len){ - qeth_eddp_copy_data_tcp(page + page_offset, eddp, - page_remainder, &hcsum); - element->length += page_remainder; - if (first_lap) - element->flags = SBAL_FLAGS_FIRST_FRAG; - else - element->flags = SBAL_FLAGS_MIDDLE_FRAG; - ctx->num_elements++; - element++; - data_len -= page_remainder; - ctx->offset += page_remainder; - page = ctx->pages[ctx->offset >> PAGE_SHIFT]; - page_offset = 0; - element->addr = page + page_offset; - } else { - qeth_eddp_copy_data_tcp(page + page_offset, eddp, - data_len, &hcsum); - element->length += data_len; - if (!first_lap) - element->flags = SBAL_FLAGS_LAST_FRAG; - ctx->num_elements++; - ctx->offset += data_len; - data_len = 0; - } - first_lap = 0; - } - ((struct tcphdr *)eddp->th_in_ctx)->check = csum_fold(hcsum); -} - -static __wsum -qeth_eddp_check_tcp4_hdr(struct qeth_eddp_data *eddp, int data_len) -{ - __wsum phcsum; /* pseudo header checksum */ - - QETH_DBF_TEXT(trace, 5, "eddpckt4"); - eddp->th.tcp.h.check = 0; - /* compute pseudo header checksum */ 
- phcsum = csum_tcpudp_nofold(eddp->nh.ip4.h.saddr, eddp->nh.ip4.h.daddr, - eddp->thl + data_len, IPPROTO_TCP, 0); - /* compute checksum of tcp header */ - return csum_partial((u8 *)&eddp->th, eddp->thl, phcsum); -} - -static __wsum -qeth_eddp_check_tcp6_hdr(struct qeth_eddp_data *eddp, int data_len) -{ - __be32 proto; - __wsum phcsum; /* pseudo header checksum */ - - QETH_DBF_TEXT(trace, 5, "eddpckt6"); - eddp->th.tcp.h.check = 0; - /* compute pseudo header checksum */ - phcsum = csum_partial((u8 *)&eddp->nh.ip6.h.saddr, - sizeof(struct in6_addr), 0); - phcsum = csum_partial((u8 *)&eddp->nh.ip6.h.daddr, - sizeof(struct in6_addr), phcsum); - proto = htonl(IPPROTO_TCP); - phcsum = csum_partial((u8 *)&proto, sizeof(u32), phcsum); - return phcsum; -} - -static struct qeth_eddp_data * -qeth_eddp_create_eddp_data(struct qeth_hdr *qh, u8 *nh, u8 nhl, u8 *th, u8 thl) -{ - struct qeth_eddp_data *eddp; - - QETH_DBF_TEXT(trace, 5, "eddpcrda"); - eddp = kzalloc(sizeof(struct qeth_eddp_data), GFP_ATOMIC); - if (eddp){ - eddp->nhl = nhl; - eddp->thl = thl; - memcpy(&eddp->qh, qh, sizeof(struct qeth_hdr)); - memcpy(&eddp->nh, nh, nhl); - memcpy(&eddp->th, th, thl); - eddp->frag = -1; /* initially we're in skb->data */ - } - return eddp; -} - -static void -__qeth_eddp_fill_context_tcp(struct qeth_eddp_context *ctx, - struct qeth_eddp_data *eddp) -{ - struct tcphdr *tcph; - int data_len; - __wsum hcsum; - - QETH_DBF_TEXT(trace, 5, "eddpftcp"); - eddp->skb_offset = sizeof(struct qeth_hdr) + eddp->nhl + eddp->thl; - if (eddp->qh.hdr.l2.id == QETH_HEADER_TYPE_LAYER2) { - eddp->skb_offset += sizeof(struct ethhdr); -#ifdef CONFIG_QETH_VLAN - if (eddp->mac.h_proto == __constant_htons(ETH_P_8021Q)) - eddp->skb_offset += VLAN_HLEN; -#endif /* CONFIG_QETH_VLAN */ - } - tcph = tcp_hdr(eddp->skb); - while (eddp->skb_offset < eddp->skb->len) { - data_len = min((int)skb_shinfo(eddp->skb)->gso_size, - (int)(eddp->skb->len - eddp->skb_offset)); - /* prepare qdio hdr */ - if (eddp->qh.hdr.l2.id == QETH_HEADER_TYPE_LAYER2){ - eddp->qh.hdr.l2.pkt_length = data_len + ETH_HLEN + - eddp->nhl + eddp->thl; -#ifdef CONFIG_QETH_VLAN - if (eddp->mac.h_proto == __constant_htons(ETH_P_8021Q)) - eddp->qh.hdr.l2.pkt_length += VLAN_HLEN; -#endif /* CONFIG_QETH_VLAN */ - } else - eddp->qh.hdr.l3.length = data_len + eddp->nhl + - eddp->thl; - /* prepare ip hdr */ - if (eddp->skb->protocol == htons(ETH_P_IP)){ - eddp->nh.ip4.h.tot_len = htons(data_len + eddp->nhl + - eddp->thl); - eddp->nh.ip4.h.check = 0; - eddp->nh.ip4.h.check = - ip_fast_csum((u8 *)&eddp->nh.ip4.h, - eddp->nh.ip4.h.ihl); - } else - eddp->nh.ip6.h.payload_len = htons(data_len + eddp->thl); - /* prepare tcp hdr */ - if (data_len == (eddp->skb->len - eddp->skb_offset)){ - /* last segment -> set FIN and PSH flags */ - eddp->th.tcp.h.fin = tcph->fin; - eddp->th.tcp.h.psh = tcph->psh; - } - if (eddp->skb->protocol == htons(ETH_P_IP)) - hcsum = qeth_eddp_check_tcp4_hdr(eddp, data_len); - else - hcsum = qeth_eddp_check_tcp6_hdr(eddp, data_len); - /* fill the next segment into the context */ - qeth_eddp_create_segment_hdrs(ctx, eddp, data_len); - qeth_eddp_create_segment_data_tcp(ctx, eddp, data_len, hcsum); - if (eddp->skb_offset >= eddp->skb->len) - break; - /* prepare headers for next round */ - if (eddp->skb->protocol == htons(ETH_P_IP)) - eddp->nh.ip4.h.id = htons(ntohs(eddp->nh.ip4.h.id) + 1); - eddp->th.tcp.h.seq = htonl(ntohl(eddp->th.tcp.h.seq) + data_len); - } -} - -static int -qeth_eddp_fill_context_tcp(struct qeth_eddp_context *ctx, - struct sk_buff *skb, struct 
qeth_hdr *qhdr) -{ - struct qeth_eddp_data *eddp = NULL; - - QETH_DBF_TEXT(trace, 5, "eddpficx"); - /* create our segmentation headers and copy original headers */ - if (skb->protocol == htons(ETH_P_IP)) - eddp = qeth_eddp_create_eddp_data(qhdr, - skb_network_header(skb), - ip_hdrlen(skb), - skb_transport_header(skb), - tcp_hdrlen(skb)); - else - eddp = qeth_eddp_create_eddp_data(qhdr, - skb_network_header(skb), - sizeof(struct ipv6hdr), - skb_transport_header(skb), - tcp_hdrlen(skb)); - - if (eddp == NULL) { - QETH_DBF_TEXT(trace, 2, "eddpfcnm"); - return -ENOMEM; - } - if (qhdr->hdr.l2.id == QETH_HEADER_TYPE_LAYER2) { - skb_set_mac_header(skb, sizeof(struct qeth_hdr)); - memcpy(&eddp->mac, eth_hdr(skb), ETH_HLEN); -#ifdef CONFIG_QETH_VLAN - if (eddp->mac.h_proto == __constant_htons(ETH_P_8021Q)) { - eddp->vlan[0] = skb->protocol; - eddp->vlan[1] = htons(vlan_tx_tag_get(skb)); - } -#endif /* CONFIG_QETH_VLAN */ - } - /* the next flags will only be set on the last segment */ - eddp->th.tcp.h.fin = 0; - eddp->th.tcp.h.psh = 0; - eddp->skb = skb; - /* begin segmentation and fill context */ - __qeth_eddp_fill_context_tcp(ctx, eddp); - kfree(eddp); - return 0; -} - -static void -qeth_eddp_calc_num_pages(struct qeth_eddp_context *ctx, struct sk_buff *skb, - int hdr_len) -{ - int skbs_per_page; - - QETH_DBF_TEXT(trace, 5, "eddpcanp"); - /* can we put multiple skbs in one page? */ - skbs_per_page = PAGE_SIZE / (skb_shinfo(skb)->gso_size + hdr_len); - if (skbs_per_page > 1){ - ctx->num_pages = (skb_shinfo(skb)->gso_segs + 1) / - skbs_per_page + 1; - ctx->elements_per_skb = 1; - } else { - /* no -> how many elements per skb? */ - ctx->elements_per_skb = (skb_shinfo(skb)->gso_size + hdr_len + - PAGE_SIZE) >> PAGE_SHIFT; - ctx->num_pages = ctx->elements_per_skb * - (skb_shinfo(skb)->gso_segs + 1); - } - ctx->num_elements = ctx->elements_per_skb * - (skb_shinfo(skb)->gso_segs + 1); -} - -static struct qeth_eddp_context * -qeth_eddp_create_context_generic(struct qeth_card *card, struct sk_buff *skb, - int hdr_len) -{ - struct qeth_eddp_context *ctx = NULL; - u8 *addr; - int i; - - QETH_DBF_TEXT(trace, 5, "creddpcg"); - /* create the context and allocate pages */ - ctx = kzalloc(sizeof(struct qeth_eddp_context), GFP_ATOMIC); - if (ctx == NULL){ - QETH_DBF_TEXT(trace, 2, "ceddpcn1"); - return NULL; - } - ctx->type = QETH_LARGE_SEND_EDDP; - qeth_eddp_calc_num_pages(ctx, skb, hdr_len); - if (ctx->elements_per_skb > QETH_MAX_BUFFER_ELEMENTS(card)){ - QETH_DBF_TEXT(trace, 2, "ceddpcis"); - kfree(ctx); - return NULL; - } - ctx->pages = kcalloc(ctx->num_pages, sizeof(u8 *), GFP_ATOMIC); - if (ctx->pages == NULL){ - QETH_DBF_TEXT(trace, 2, "ceddpcn2"); - kfree(ctx); - return NULL; - } - for (i = 0; i < ctx->num_pages; ++i){ - addr = (u8 *)__get_free_page(GFP_ATOMIC); - if (addr == NULL){ - QETH_DBF_TEXT(trace, 2, "ceddpcn3"); - ctx->num_pages = i; - qeth_eddp_free_context(ctx); - return NULL; - } - memset(addr, 0, PAGE_SIZE); - ctx->pages[i] = addr; - } - ctx->elements = kcalloc(ctx->num_elements, - sizeof(struct qeth_eddp_element), GFP_ATOMIC); - if (ctx->elements == NULL){ - QETH_DBF_TEXT(trace, 2, "ceddpcn4"); - qeth_eddp_free_context(ctx); - return NULL; - } - /* reset num_elements; will be incremented again in fill_buffer to - * reflect number of actually used elements */ - ctx->num_elements = 0; - return ctx; -} - -static struct qeth_eddp_context * -qeth_eddp_create_context_tcp(struct qeth_card *card, struct sk_buff *skb, - struct qeth_hdr *qhdr) -{ - struct qeth_eddp_context *ctx = NULL; - - 
QETH_DBF_TEXT(trace, 5, "creddpct"); - if (skb->protocol == htons(ETH_P_IP)) - ctx = qeth_eddp_create_context_generic(card, skb, - (sizeof(struct qeth_hdr) + - ip_hdrlen(skb) + - tcp_hdrlen(skb))); - else if (skb->protocol == htons(ETH_P_IPV6)) - ctx = qeth_eddp_create_context_generic(card, skb, - sizeof(struct qeth_hdr) + sizeof(struct ipv6hdr) + - tcp_hdrlen(skb)); - else - QETH_DBF_TEXT(trace, 2, "cetcpinv"); - - if (ctx == NULL) { - QETH_DBF_TEXT(trace, 2, "creddpnl"); - return NULL; - } - if (qeth_eddp_fill_context_tcp(ctx, skb, qhdr)){ - QETH_DBF_TEXT(trace, 2, "ceddptfe"); - qeth_eddp_free_context(ctx); - return NULL; - } - atomic_set(&ctx->refcnt, 1); - return ctx; -} - -struct qeth_eddp_context * -qeth_eddp_create_context(struct qeth_card *card, struct sk_buff *skb, - struct qeth_hdr *qhdr, unsigned char sk_protocol) -{ - QETH_DBF_TEXT(trace, 5, "creddpc"); - switch (sk_protocol) { - case IPPROTO_TCP: - return qeth_eddp_create_context_tcp(card, skb, qhdr); - default: - QETH_DBF_TEXT(trace, 2, "eddpinvp"); - } - return NULL; -} diff --git a/drivers/s390/net/qeth_eddp.h b/drivers/s390/net/qeth_eddp.h deleted file mode 100644 index 52910c9252c0..000000000000 --- a/drivers/s390/net/qeth_eddp.h +++ /dev/null @@ -1,84 +0,0 @@ -/* - * linux/drivers/s390/net/qeth_eddp.h - * - * Header file for qeth enhanced device driver packing. - * - * Copyright 2004 IBM Corporation - * - * Author(s): Thomas Spatzier - * - */ -#ifndef __QETH_EDDP_H__ -#define __QETH_EDDP_H__ - -struct qeth_eddp_element { - u32 flags; - u32 length; - void *addr; -}; - -struct qeth_eddp_context { - atomic_t refcnt; - enum qeth_large_send_types type; - int num_pages; /* # of allocated pages */ - u8 **pages; /* pointers to pages */ - int offset; /* offset in ctx during creation */ - int num_elements; /* # of required 'SBALEs' */ - struct qeth_eddp_element *elements; /* array of 'SBALEs' */ - int elements_per_skb; /* # of 'SBALEs' per skb **/ -}; - -struct qeth_eddp_context_reference { - struct list_head list; - struct qeth_eddp_context *ctx; -}; - -extern struct qeth_eddp_context * -qeth_eddp_create_context(struct qeth_card *,struct sk_buff *, - struct qeth_hdr *, unsigned char); - -extern void -qeth_eddp_put_context(struct qeth_eddp_context *); - -extern int -qeth_eddp_fill_buffer(struct qeth_qdio_out_q *,struct qeth_eddp_context *,int); - -extern void -qeth_eddp_buf_release_contexts(struct qeth_qdio_out_buffer *); - -extern int -qeth_eddp_check_buffers_for_context(struct qeth_qdio_out_q *, - struct qeth_eddp_context *); -/* - * Data used for fragmenting a IP packet. - */ -struct qeth_eddp_data { - struct qeth_hdr qh; - struct ethhdr mac; - __be16 vlan[2]; - union { - struct { - struct iphdr h; - u8 options[40]; - } ip4; - struct { - struct ipv6hdr h; - } ip6; - } nh; - u8 nhl; - void *nh_in_ctx; /* address of nh within the ctx */ - union { - struct { - struct tcphdr h; - u8 options[40]; - } tcp; - } th; - u8 thl; - void *th_in_ctx; /* address of th within the ctx */ - struct sk_buff *skb; - int skb_offset; - int frag; - int frag_offset; -} __attribute__ ((packed)); - -#endif /* __QETH_EDDP_H__ */ diff --git a/drivers/s390/net/qeth_fs.h b/drivers/s390/net/qeth_fs.h deleted file mode 100644 index 61faf05517d6..000000000000 --- a/drivers/s390/net/qeth_fs.h +++ /dev/null @@ -1,168 +0,0 @@ -/* - * linux/drivers/s390/net/qeth_fs.h - * - * Linux on zSeries OSA Express and HiperSockets support. - * - * This header file contains definitions related to sysfs and procfs. 
- * - * Copyright 2000,2003 IBM Corporation - * Author(s): Thomas Spatzier - * - */ -#ifndef __QETH_FS_H__ -#define __QETH_FS_H__ - -#ifdef CONFIG_PROC_FS -extern int -qeth_create_procfs_entries(void); - -extern void -qeth_remove_procfs_entries(void); -#else -static inline int -qeth_create_procfs_entries(void) -{ - return 0; -} - -static inline void -qeth_remove_procfs_entries(void) -{ -} -#endif /* CONFIG_PROC_FS */ - -extern int -qeth_create_device_attributes(struct device *dev); - -extern void -qeth_remove_device_attributes(struct device *dev); - -extern int -qeth_create_device_attributes_osn(struct device *dev); - -extern void -qeth_remove_device_attributes_osn(struct device *dev); - -extern int -qeth_create_driver_attributes(void); - -extern void -qeth_remove_driver_attributes(void); - -/* - * utility functions used in qeth_proc.c and qeth_sys.c - */ - -static inline const char * -qeth_get_checksum_str(struct qeth_card *card) -{ - if (card->options.checksum_type == SW_CHECKSUMMING) - return "sw"; - else if (card->options.checksum_type == HW_CHECKSUMMING) - return "hw"; - else - return "no"; -} - -static inline const char * -qeth_get_prioq_str(struct qeth_card *card, char *buf) -{ - if (card->qdio.do_prio_queueing == QETH_NO_PRIO_QUEUEING) - sprintf(buf, "always_q_%i", card->qdio.default_out_queue); - else - strcpy(buf, (card->qdio.do_prio_queueing == - QETH_PRIO_Q_ING_PREC)? - "by_prec." : "by_ToS"); - return buf; -} - -static inline const char * -qeth_get_bufsize_str(struct qeth_card *card) -{ - if (card->qdio.in_buf_size == 16384) - return "16k"; - else if (card->qdio.in_buf_size == 24576) - return "24k"; - else if (card->qdio.in_buf_size == 32768) - return "32k"; - else if (card->qdio.in_buf_size == 40960) - return "40k"; - else - return "64k"; -} - -static inline const char * -qeth_get_cardname(struct qeth_card *card) -{ - if (card->info.guestlan) { - switch (card->info.type) { - case QETH_CARD_TYPE_OSAE: - return " Guest LAN QDIO"; - case QETH_CARD_TYPE_IQD: - return " Guest LAN Hiper"; - default: - return " unknown"; - } - } else { - switch (card->info.type) { - case QETH_CARD_TYPE_OSAE: - return " OSD Express"; - case QETH_CARD_TYPE_IQD: - return " HiperSockets"; - case QETH_CARD_TYPE_OSN: - return " OSN QDIO"; - default: - return " unknown"; - } - } - return " n/a"; -} - -/* max length to be returned: 14 */ -static inline const char * -qeth_get_cardname_short(struct qeth_card *card) -{ - if (card->info.guestlan){ - switch (card->info.type){ - case QETH_CARD_TYPE_OSAE: - return "GuestLAN QDIO"; - case QETH_CARD_TYPE_IQD: - return "GuestLAN Hiper"; - default: - return "unknown"; - } - } else { - switch (card->info.type) { - case QETH_CARD_TYPE_OSAE: - switch (card->info.link_type) { - case QETH_LINK_TYPE_FAST_ETH: - return "OSD_100"; - case QETH_LINK_TYPE_HSTR: - return "HSTR"; - case QETH_LINK_TYPE_GBIT_ETH: - return "OSD_1000"; - case QETH_LINK_TYPE_10GBIT_ETH: - return "OSD_10GIG"; - case QETH_LINK_TYPE_LANE_ETH100: - return "OSD_FE_LANE"; - case QETH_LINK_TYPE_LANE_TR: - return "OSD_TR_LANE"; - case QETH_LINK_TYPE_LANE_ETH1000: - return "OSD_GbE_LANE"; - case QETH_LINK_TYPE_LANE: - return "OSD_ATM_LANE"; - default: - return "OSD_Express"; - } - case QETH_CARD_TYPE_IQD: - return "HiperSockets"; - case QETH_CARD_TYPE_OSN: - return "OSN"; - default: - return "unknown"; - } - } - return "n/a"; -} - -#endif /* __QETH_FS_H__ */ diff --git a/drivers/s390/net/qeth_main.c b/drivers/s390/net/qeth_main.c deleted file mode 100644 index 62606ce26e55..000000000000 --- 
a/drivers/s390/net/qeth_main.c +++ /dev/null @@ -1,8956 +0,0 @@ -/* - * linux/drivers/s390/net/qeth_main.c - * - * Linux on zSeries OSA Express and HiperSockets support - * - * Copyright 2000,2003 IBM Corporation - * - * Author(s): Original Code written by - * Utz Bacher (utz.bacher@de.ibm.com) - * Rewritten by - * Frank Pavlic (fpavlic@de.ibm.com) and - * Thomas Spatzier - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License as published by - * the Free Software Foundation; either version 2, or (at your option) - * any later version. - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. - * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. - */ - - -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#include -#include -#include - -#include -#include -#include -#include -#include -#include -#include - -#include "qeth.h" -#include "qeth_mpc.h" -#include "qeth_fs.h" -#include "qeth_eddp.h" -#include "qeth_tso.h" - -static const char *version = "qeth S/390 OSA-Express driver"; - -/** - * Debug Facility Stuff - */ -static debug_info_t *qeth_dbf_setup = NULL; -static debug_info_t *qeth_dbf_data = NULL; -static debug_info_t *qeth_dbf_misc = NULL; -static debug_info_t *qeth_dbf_control = NULL; -debug_info_t *qeth_dbf_trace = NULL; -static debug_info_t *qeth_dbf_sense = NULL; -static debug_info_t *qeth_dbf_qerr = NULL; - -DEFINE_PER_CPU(char[256], qeth_dbf_txt_buf); - -static struct lock_class_key qdio_out_skb_queue_key; - -/** - * some more definitions and declarations - */ -static unsigned int known_devices[][10] = QETH_MODELLIST_ARRAY; - -/* list of our cards */ -struct qeth_card_list_struct qeth_card_list; -/*process list want to be notified*/ -spinlock_t qeth_notify_lock; -struct list_head qeth_notify_list; - -static void qeth_send_control_data_cb(struct qeth_channel *, - struct qeth_cmd_buffer *); - -/** - * here we go with function implementation - */ -static void -qeth_init_qdio_info(struct qeth_card *card); - -static int -qeth_init_qdio_queues(struct qeth_card *card); - -static int -qeth_alloc_qdio_buffers(struct qeth_card *card); - -static void -qeth_free_qdio_buffers(struct qeth_card *); - -static void -qeth_clear_qdio_buffers(struct qeth_card *); - -static void -qeth_clear_ip_list(struct qeth_card *, int, int); - -static void -qeth_clear_ipacmd_list(struct qeth_card *); - -static int -qeth_qdio_clear_card(struct qeth_card *, int); - -static void -qeth_clear_working_pool_list(struct qeth_card *); - -static void -qeth_clear_cmd_buffers(struct qeth_channel *); - -static int -qeth_stop(struct net_device *); - -static void -qeth_clear_ipato_list(struct qeth_card *); - -static int -qeth_is_addr_covered_by_ipato(struct qeth_card *, struct qeth_ipaddr *); - -static void -qeth_irq_tasklet(unsigned long); - -static int -qeth_set_online(struct ccwgroup_device *); - -static int -__qeth_set_online(struct ccwgroup_device *gdev, int recovery_mode); - -static struct qeth_ipaddr * -qeth_get_addr_buffer(enum 
qeth_prot_versions); - -static void -qeth_set_multicast_list(struct net_device *); - -static void -qeth_setadp_promisc_mode(struct qeth_card *); - -static int -qeth_hard_header_parse(const struct sk_buff *skb, unsigned char *haddr); - -static void -qeth_notify_processes(void) -{ - /*notify all registered processes */ - struct qeth_notify_list_struct *n_entry; - - QETH_DBF_TEXT(trace,3,"procnoti"); - spin_lock(&qeth_notify_lock); - list_for_each_entry(n_entry, &qeth_notify_list, list) { - send_sig(n_entry->signum, n_entry->task, 1); - } - spin_unlock(&qeth_notify_lock); - -} -int -qeth_notifier_unregister(struct task_struct *p) -{ - struct qeth_notify_list_struct *n_entry, *tmp; - - QETH_DBF_TEXT(trace, 2, "notunreg"); - spin_lock(&qeth_notify_lock); - list_for_each_entry_safe(n_entry, tmp, &qeth_notify_list, list) { - if (n_entry->task == p) { - list_del(&n_entry->list); - kfree(n_entry); - goto out; - } - } -out: - spin_unlock(&qeth_notify_lock); - return 0; -} -int -qeth_notifier_register(struct task_struct *p, int signum) -{ - struct qeth_notify_list_struct *n_entry; - - /*check first if entry already exists*/ - spin_lock(&qeth_notify_lock); - list_for_each_entry(n_entry, &qeth_notify_list, list) { - if (n_entry->task == p) { - n_entry->signum = signum; - spin_unlock(&qeth_notify_lock); - return 0; - } - } - spin_unlock(&qeth_notify_lock); - - n_entry = (struct qeth_notify_list_struct *) - kmalloc(sizeof(struct qeth_notify_list_struct),GFP_KERNEL); - if (!n_entry) - return -ENOMEM; - n_entry->task = p; - n_entry->signum = signum; - spin_lock(&qeth_notify_lock); - list_add(&n_entry->list,&qeth_notify_list); - spin_unlock(&qeth_notify_lock); - return 0; -} - - -/** - * free channel command buffers - */ -static void -qeth_clean_channel(struct qeth_channel *channel) -{ - int cnt; - - QETH_DBF_TEXT(setup, 2, "freech"); - for (cnt = 0; cnt < QETH_CMD_BUFFER_NO; cnt++) - kfree(channel->iob[cnt].data); -} - -/** - * free card - */ -static void -qeth_free_card(struct qeth_card *card) -{ - - QETH_DBF_TEXT(setup, 2, "freecrd"); - QETH_DBF_HEX(setup, 2, &card, sizeof(void *)); - qeth_clean_channel(&card->read); - qeth_clean_channel(&card->write); - if (card->dev) - free_netdev(card->dev); - qeth_clear_ip_list(card, 0, 0); - qeth_clear_ipato_list(card); - kfree(card->ip_tbd_list); - qeth_free_qdio_buffers(card); - kfree(card); -} - -/** - * alloc memory for command buffer per channel - */ -static int -qeth_setup_channel(struct qeth_channel *channel) -{ - int cnt; - - QETH_DBF_TEXT(setup, 2, "setupch"); - for (cnt=0; cnt < QETH_CMD_BUFFER_NO; cnt++) { - channel->iob[cnt].data = (char *) - kmalloc(QETH_BUFSIZE, GFP_DMA|GFP_KERNEL); - if (channel->iob[cnt].data == NULL) - break; - channel->iob[cnt].state = BUF_STATE_FREE; - channel->iob[cnt].channel = channel; - channel->iob[cnt].callback = qeth_send_control_data_cb; - channel->iob[cnt].rc = 0; - } - if (cnt < QETH_CMD_BUFFER_NO) { - while (cnt-- > 0) - kfree(channel->iob[cnt].data); - return -ENOMEM; - } - channel->buf_no = 0; - channel->io_buf_no = 0; - atomic_set(&channel->irq_pending, 0); - spin_lock_init(&channel->iob_lock); - - init_waitqueue_head(&channel->wait_q); - channel->irq_tasklet.data = (unsigned long) channel; - channel->irq_tasklet.func = qeth_irq_tasklet; - return 0; -} - -/** - * alloc memory for card structure - */ -static struct qeth_card * -qeth_alloc_card(void) -{ - struct qeth_card *card; - - QETH_DBF_TEXT(setup, 2, "alloccrd"); - card = kzalloc(sizeof(struct qeth_card), GFP_DMA|GFP_KERNEL); - if (!card) - return NULL; - 
QETH_DBF_HEX(setup, 2, &card, sizeof(void *)); - if (qeth_setup_channel(&card->read)) { - kfree(card); - return NULL; - } - if (qeth_setup_channel(&card->write)) { - qeth_clean_channel(&card->read); - kfree(card); - return NULL; - } - return card; -} - -static long -__qeth_check_irb_error(struct ccw_device *cdev, unsigned long intparm, - struct irb *irb) -{ - if (!IS_ERR(irb)) - return 0; - - switch (PTR_ERR(irb)) { - case -EIO: - PRINT_WARN("i/o-error on device %s\n", cdev->dev.bus_id); - QETH_DBF_TEXT(trace, 2, "ckirberr"); - QETH_DBF_TEXT_(trace, 2, " rc%d", -EIO); - break; - case -ETIMEDOUT: - PRINT_WARN("timeout on device %s\n", cdev->dev.bus_id); - QETH_DBF_TEXT(trace, 2, "ckirberr"); - QETH_DBF_TEXT_(trace, 2, " rc%d", -ETIMEDOUT); - if (intparm == QETH_RCD_PARM) { - struct qeth_card *card = CARD_FROM_CDEV(cdev); - - if (card && (card->data.ccwdev == cdev)) { - card->data.state = CH_STATE_DOWN; - wake_up(&card->wait_q); - } - } - break; - default: - PRINT_WARN("unknown error %ld on device %s\n", PTR_ERR(irb), - cdev->dev.bus_id); - QETH_DBF_TEXT(trace, 2, "ckirberr"); - QETH_DBF_TEXT(trace, 2, " rc???"); - } - return PTR_ERR(irb); -} - -static int -qeth_get_problem(struct ccw_device *cdev, struct irb *irb) -{ - int dstat,cstat; - char *sense; - - sense = (char *) irb->ecw; - cstat = irb->scsw.cstat; - dstat = irb->scsw.dstat; - - if (cstat & (SCHN_STAT_CHN_CTRL_CHK | SCHN_STAT_INTF_CTRL_CHK | - SCHN_STAT_CHN_DATA_CHK | SCHN_STAT_CHAIN_CHECK | - SCHN_STAT_PROT_CHECK | SCHN_STAT_PROG_CHECK)) { - QETH_DBF_TEXT(trace,2, "CGENCHK"); - PRINT_WARN("check on device %s, dstat=x%x, cstat=x%x ", - cdev->dev.bus_id, dstat, cstat); - HEXDUMP16(WARN, "irb: ", irb); - HEXDUMP16(WARN, "irb: ", ((char *) irb) + 32); - return 1; - } - - if (dstat & DEV_STAT_UNIT_CHECK) { - if (sense[SENSE_RESETTING_EVENT_BYTE] & - SENSE_RESETTING_EVENT_FLAG) { - QETH_DBF_TEXT(trace,2,"REVIND"); - return 1; - } - if (sense[SENSE_COMMAND_REJECT_BYTE] & - SENSE_COMMAND_REJECT_FLAG) { - QETH_DBF_TEXT(trace,2,"CMDREJi"); - return 0; - } - if ((sense[2] == 0xaf) && (sense[3] == 0xfe)) { - QETH_DBF_TEXT(trace,2,"AFFE"); - return 1; - } - if ((!sense[0]) && (!sense[1]) && (!sense[2]) && (!sense[3])) { - QETH_DBF_TEXT(trace,2,"ZEROSEN"); - return 0; - } - QETH_DBF_TEXT(trace,2,"DGENCHK"); - return 1; - } - return 0; -} -static int qeth_issue_next_read(struct qeth_card *); - -/** - * interrupt handler - */ -static void -qeth_irq(struct ccw_device *cdev, unsigned long intparm, struct irb *irb) -{ - int rc; - int cstat,dstat; - struct qeth_cmd_buffer *buffer; - struct qeth_channel *channel; - struct qeth_card *card; - - QETH_DBF_TEXT(trace,5,"irq"); - - if (__qeth_check_irb_error(cdev, intparm, irb)) - return; - cstat = irb->scsw.cstat; - dstat = irb->scsw.dstat; - - card = CARD_FROM_CDEV(cdev); - if (!card) - return; - - if (card->read.ccwdev == cdev){ - channel = &card->read; - QETH_DBF_TEXT(trace,5,"read"); - } else if (card->write.ccwdev == cdev) { - channel = &card->write; - QETH_DBF_TEXT(trace,5,"write"); - } else { - channel = &card->data; - QETH_DBF_TEXT(trace,5,"data"); - } - atomic_set(&channel->irq_pending, 0); - - if (irb->scsw.fctl & (SCSW_FCTL_CLEAR_FUNC)) - channel->state = CH_STATE_STOPPED; - - if (irb->scsw.fctl & (SCSW_FCTL_HALT_FUNC)) - channel->state = CH_STATE_HALTED; - - /*let's wake up immediately on data channel*/ - if ((channel == &card->data) && (intparm != 0) && - (intparm != QETH_RCD_PARM)) - goto out; - - if (intparm == QETH_CLEAR_CHANNEL_PARM) { - QETH_DBF_TEXT(trace, 6, "clrchpar"); - /* we don't 
have to handle this further */ - intparm = 0; - } - if (intparm == QETH_HALT_CHANNEL_PARM) { - QETH_DBF_TEXT(trace, 6, "hltchpar"); - /* we don't have to handle this further */ - intparm = 0; - } - if ((dstat & DEV_STAT_UNIT_EXCEP) || - (dstat & DEV_STAT_UNIT_CHECK) || - (cstat)) { - if (irb->esw.esw0.erw.cons) { - /* TODO: we should make this s390dbf */ - PRINT_WARN("sense data available on channel %s.\n", - CHANNEL_ID(channel)); - PRINT_WARN(" cstat 0x%X\n dstat 0x%X\n", cstat, dstat); - HEXDUMP16(WARN,"irb: ",irb); - HEXDUMP16(WARN,"sense data: ",irb->ecw); - } - if (intparm == QETH_RCD_PARM) { - channel->state = CH_STATE_DOWN; - goto out; - } - rc = qeth_get_problem(cdev,irb); - if (rc) { - qeth_schedule_recovery(card); - goto out; - } - } - - if (intparm == QETH_RCD_PARM) { - channel->state = CH_STATE_RCD_DONE; - goto out; - } - if (intparm) { - buffer = (struct qeth_cmd_buffer *) __va((addr_t)intparm); - buffer->state = BUF_STATE_PROCESSED; - } - if (channel == &card->data) - return; - - if (channel == &card->read && - channel->state == CH_STATE_UP) - qeth_issue_next_read(card); - - qeth_irq_tasklet((unsigned long)channel); - return; -out: - wake_up(&card->wait_q); -} - -/** - * tasklet function scheduled from irq handler - */ -static void -qeth_irq_tasklet(unsigned long data) -{ - struct qeth_card *card; - struct qeth_channel *channel; - struct qeth_cmd_buffer *iob; - __u8 index; - - QETH_DBF_TEXT(trace,5,"irqtlet"); - channel = (struct qeth_channel *) data; - iob = channel->iob; - index = channel->buf_no; - card = CARD_FROM_CDEV(channel->ccwdev); - while (iob[index].state == BUF_STATE_PROCESSED) { - if (iob[index].callback !=NULL) { - iob[index].callback(channel,iob + index); - } - index = (index + 1) % QETH_CMD_BUFFER_NO; - } - channel->buf_no = index; - wake_up(&card->wait_q); -} - -static int qeth_stop_card(struct qeth_card *, int); - -static int -__qeth_set_offline(struct ccwgroup_device *cgdev, int recovery_mode) -{ - struct qeth_card *card = (struct qeth_card *) cgdev->dev.driver_data; - int rc = 0, rc2 = 0, rc3 = 0; - enum qeth_card_states recover_flag; - - QETH_DBF_TEXT(setup, 3, "setoffl"); - QETH_DBF_HEX(setup, 3, &card, sizeof(void *)); - - if (card->dev && netif_carrier_ok(card->dev)) - netif_carrier_off(card->dev); - recover_flag = card->state; - if (qeth_stop_card(card, recovery_mode) == -ERESTARTSYS){ - PRINT_WARN("Stopping card %s interrupted by user!\n", - CARD_BUS_ID(card)); - return -ERESTARTSYS; - } - rc = ccw_device_set_offline(CARD_DDEV(card)); - rc2 = ccw_device_set_offline(CARD_WDEV(card)); - rc3 = ccw_device_set_offline(CARD_RDEV(card)); - if (!rc) - rc = (rc2) ? 
rc2 : rc3; - if (rc) - QETH_DBF_TEXT_(setup, 2, "1err%d", rc); - if (recover_flag == CARD_STATE_UP) - card->state = CARD_STATE_RECOVER; - qeth_notify_processes(); - return 0; -} - -static int -qeth_set_offline(struct ccwgroup_device *cgdev) -{ - return __qeth_set_offline(cgdev, 0); -} - -static int -qeth_threads_running(struct qeth_card *card, unsigned long threads); - - -static void -qeth_remove_device(struct ccwgroup_device *cgdev) -{ - struct qeth_card *card = (struct qeth_card *) cgdev->dev.driver_data; - unsigned long flags; - - QETH_DBF_TEXT(setup, 3, "rmdev"); - QETH_DBF_HEX(setup, 3, &card, sizeof(void *)); - - if (!card) - return; - - wait_event(card->wait_q, qeth_threads_running(card, 0xffffffff) == 0); - - if (cgdev->state == CCWGROUP_ONLINE){ - card->use_hard_stop = 1; - qeth_set_offline(cgdev); - } - /* remove form our internal list */ - write_lock_irqsave(&qeth_card_list.rwlock, flags); - list_del(&card->list); - write_unlock_irqrestore(&qeth_card_list.rwlock, flags); - if (card->dev) - unregister_netdev(card->dev); - qeth_remove_device_attributes(&cgdev->dev); - qeth_free_card(card); - cgdev->dev.driver_data = NULL; - put_device(&cgdev->dev); -} - -static int -qeth_register_addr_entry(struct qeth_card *, struct qeth_ipaddr *); -static int -qeth_deregister_addr_entry(struct qeth_card *, struct qeth_ipaddr *); - -/** - * Add/remove address to/from card's ip list, i.e. try to add or remove - * reference to/from an IP address that is already registered on the card. - * Returns: - * 0 address was on card and its reference count has been adjusted, - * but is still > 0, so nothing has to be done - * also returns 0 if card was not on card and the todo was to delete - * the address -> there is also nothing to be done - * 1 address was not on card and the todo is to add it to the card's ip - * list - * -1 address was on card and its reference count has been decremented - * to <= 0 by the todo -> address must be removed from card - */ -static int -__qeth_ref_ip_on_card(struct qeth_card *card, struct qeth_ipaddr *todo, - struct qeth_ipaddr **__addr) -{ - struct qeth_ipaddr *addr; - int found = 0; - - list_for_each_entry(addr, &card->ip_list, entry) { - if (card->options.layer2) { - if ((addr->type == todo->type) && - (memcmp(&addr->mac, &todo->mac, - OSA_ADDR_LEN) == 0)) { - found = 1; - break; - } - continue; - } - if ((addr->proto == QETH_PROT_IPV4) && - (todo->proto == QETH_PROT_IPV4) && - (addr->type == todo->type) && - (addr->u.a4.addr == todo->u.a4.addr) && - (addr->u.a4.mask == todo->u.a4.mask)) { - found = 1; - break; - } - if ((addr->proto == QETH_PROT_IPV6) && - (todo->proto == QETH_PROT_IPV6) && - (addr->type == todo->type) && - (addr->u.a6.pfxlen == todo->u.a6.pfxlen) && - (memcmp(&addr->u.a6.addr, &todo->u.a6.addr, - sizeof(struct in6_addr)) == 0)) { - found = 1; - break; - } - } - if (found) { - addr->users += todo->users; - if (addr->users <= 0){ - *__addr = addr; - return -1; - } else { - /* for VIPA and RXIP limit refcount to 1 */ - if (addr->type != QETH_IP_TYPE_NORMAL) - addr->users = 1; - return 0; - } - } - if (todo->users > 0) { - /* for VIPA and RXIP limit refcount to 1 */ - if (todo->type != QETH_IP_TYPE_NORMAL) - todo->users = 1; - return 1; - } else - return 0; -} - -static int -__qeth_address_exists_in_list(struct list_head *list, struct qeth_ipaddr *addr, - int same_type) -{ - struct qeth_ipaddr *tmp; - - list_for_each_entry(tmp, list, entry) { - if ((tmp->proto == QETH_PROT_IPV4) && - (addr->proto == QETH_PROT_IPV4) && - ((same_type && (tmp->type == 
addr->type)) || - (!same_type && (tmp->type != addr->type)) ) && - (tmp->u.a4.addr == addr->u.a4.addr) ){ - return 1; - } - if ((tmp->proto == QETH_PROT_IPV6) && - (addr->proto == QETH_PROT_IPV6) && - ((same_type && (tmp->type == addr->type)) || - (!same_type && (tmp->type != addr->type)) ) && - (memcmp(&tmp->u.a6.addr, &addr->u.a6.addr, - sizeof(struct in6_addr)) == 0) ) { - return 1; - } - } - return 0; -} - -/* - * Add IP to be added to todo list. If there is already an "add todo" - * in this list we just incremenent the reference count. - * Returns 0 if we just incremented reference count. - */ -static int -__qeth_insert_ip_todo(struct qeth_card *card, struct qeth_ipaddr *addr, int add) -{ - struct qeth_ipaddr *tmp, *t; - int found = 0; - - list_for_each_entry_safe(tmp, t, card->ip_tbd_list, entry) { - if ((addr->type == QETH_IP_TYPE_DEL_ALL_MC) && - (tmp->type == QETH_IP_TYPE_DEL_ALL_MC)) - return 0; - if (card->options.layer2) { - if ((tmp->type == addr->type) && - (tmp->is_multicast == addr->is_multicast) && - (memcmp(&tmp->mac, &addr->mac, - OSA_ADDR_LEN) == 0)) { - found = 1; - break; - } - continue; - } - if ((tmp->proto == QETH_PROT_IPV4) && - (addr->proto == QETH_PROT_IPV4) && - (tmp->type == addr->type) && - (tmp->is_multicast == addr->is_multicast) && - (tmp->u.a4.addr == addr->u.a4.addr) && - (tmp->u.a4.mask == addr->u.a4.mask)) { - found = 1; - break; - } - if ((tmp->proto == QETH_PROT_IPV6) && - (addr->proto == QETH_PROT_IPV6) && - (tmp->type == addr->type) && - (tmp->is_multicast == addr->is_multicast) && - (tmp->u.a6.pfxlen == addr->u.a6.pfxlen) && - (memcmp(&tmp->u.a6.addr, &addr->u.a6.addr, - sizeof(struct in6_addr)) == 0)) { - found = 1; - break; - } - } - if (found){ - if (addr->users != 0) - tmp->users += addr->users; - else - tmp->users += add? 1:-1; - if (tmp->users == 0) { - list_del(&tmp->entry); - kfree(tmp); - } - return 0; - } else { - if (addr->type == QETH_IP_TYPE_DEL_ALL_MC) - list_add(&addr->entry, card->ip_tbd_list); - else { - if (addr->users == 0) - addr->users += add? 
1:-1; - if (add && (addr->type == QETH_IP_TYPE_NORMAL) && - qeth_is_addr_covered_by_ipato(card, addr)){ - QETH_DBF_TEXT(trace, 2, "tkovaddr"); - addr->set_flags |= QETH_IPA_SETIP_TAKEOVER_FLAG; - } - list_add_tail(&addr->entry, card->ip_tbd_list); - } - return 1; - } -} - -/** - * Remove IP address from list - */ -static int -qeth_delete_ip(struct qeth_card *card, struct qeth_ipaddr *addr) -{ - unsigned long flags; - int rc = 0; - - QETH_DBF_TEXT(trace, 4, "delip"); - - if (card->options.layer2) - QETH_DBF_HEX(trace, 4, &addr->mac, 6); - else if (addr->proto == QETH_PROT_IPV4) - QETH_DBF_HEX(trace, 4, &addr->u.a4.addr, 4); - else { - QETH_DBF_HEX(trace, 4, &addr->u.a6.addr, 8); - QETH_DBF_HEX(trace, 4, ((char *)&addr->u.a6.addr) + 8, 8); - } - spin_lock_irqsave(&card->ip_lock, flags); - rc = __qeth_insert_ip_todo(card, addr, 0); - spin_unlock_irqrestore(&card->ip_lock, flags); - return rc; -} - -static int -qeth_add_ip(struct qeth_card *card, struct qeth_ipaddr *addr) -{ - unsigned long flags; - int rc = 0; - - QETH_DBF_TEXT(trace, 4, "addip"); - if (card->options.layer2) - QETH_DBF_HEX(trace, 4, &addr->mac, 6); - else if (addr->proto == QETH_PROT_IPV4) - QETH_DBF_HEX(trace, 4, &addr->u.a4.addr, 4); - else { - QETH_DBF_HEX(trace, 4, &addr->u.a6.addr, 8); - QETH_DBF_HEX(trace, 4, ((char *)&addr->u.a6.addr) + 8, 8); - } - spin_lock_irqsave(&card->ip_lock, flags); - rc = __qeth_insert_ip_todo(card, addr, 1); - spin_unlock_irqrestore(&card->ip_lock, flags); - return rc; -} - -static void -__qeth_delete_all_mc(struct qeth_card *card, unsigned long *flags) -{ - struct qeth_ipaddr *addr, *tmp; - int rc; -again: - list_for_each_entry_safe(addr, tmp, &card->ip_list, entry) { - if (addr->is_multicast) { - list_del(&addr->entry); - spin_unlock_irqrestore(&card->ip_lock, *flags); - rc = qeth_deregister_addr_entry(card, addr); - spin_lock_irqsave(&card->ip_lock, *flags); - if (!rc) { - kfree(addr); - goto again; - } else - list_add(&addr->entry, &card->ip_list); - } - } -} - -static void -qeth_set_ip_addr_list(struct qeth_card *card) -{ - struct list_head *tbd_list; - struct qeth_ipaddr *todo, *addr; - unsigned long flags; - int rc; - - QETH_DBF_TEXT(trace, 2, "sdiplist"); - QETH_DBF_HEX(trace, 2, &card, sizeof(void *)); - - spin_lock_irqsave(&card->ip_lock, flags); - tbd_list = card->ip_tbd_list; - card->ip_tbd_list = kmalloc(sizeof(struct list_head), GFP_ATOMIC); - if (!card->ip_tbd_list) { - QETH_DBF_TEXT(trace, 0, "silnomem"); - card->ip_tbd_list = tbd_list; - spin_unlock_irqrestore(&card->ip_lock, flags); - return; - } else - INIT_LIST_HEAD(card->ip_tbd_list); - - while (!list_empty(tbd_list)){ - todo = list_entry(tbd_list->next, struct qeth_ipaddr, entry); - list_del(&todo->entry); - if (todo->type == QETH_IP_TYPE_DEL_ALL_MC){ - __qeth_delete_all_mc(card, &flags); - kfree(todo); - continue; - } - rc = __qeth_ref_ip_on_card(card, todo, &addr); - if (rc == 0) { - /* nothing to be done; only adjusted refcount */ - kfree(todo); - } else if (rc == 1) { - /* new entry to be added to on-card list */ - spin_unlock_irqrestore(&card->ip_lock, flags); - rc = qeth_register_addr_entry(card, todo); - spin_lock_irqsave(&card->ip_lock, flags); - if (!rc) - list_add_tail(&todo->entry, &card->ip_list); - else - kfree(todo); - } else if (rc == -1) { - /* on-card entry to be removed */ - list_del_init(&addr->entry); - spin_unlock_irqrestore(&card->ip_lock, flags); - rc = qeth_deregister_addr_entry(card, addr); - spin_lock_irqsave(&card->ip_lock, flags); - if (!rc) - kfree(addr); - else - list_add_tail(&addr->entry, 
&card->ip_list); - kfree(todo); - } - } - spin_unlock_irqrestore(&card->ip_lock, flags); - kfree(tbd_list); -} - -static void qeth_delete_mc_addresses(struct qeth_card *); -static void qeth_add_multicast_ipv4(struct qeth_card *); -static void qeth_layer2_add_multicast(struct qeth_card *); -#ifdef CONFIG_QETH_IPV6 -static void qeth_add_multicast_ipv6(struct qeth_card *); -#endif - -static int -qeth_set_thread_start_bit(struct qeth_card *card, unsigned long thread) -{ - unsigned long flags; - - spin_lock_irqsave(&card->thread_mask_lock, flags); - if ( !(card->thread_allowed_mask & thread) || - (card->thread_start_mask & thread) ) { - spin_unlock_irqrestore(&card->thread_mask_lock, flags); - return -EPERM; - } - card->thread_start_mask |= thread; - spin_unlock_irqrestore(&card->thread_mask_lock, flags); - return 0; -} - -static void -qeth_clear_thread_start_bit(struct qeth_card *card, unsigned long thread) -{ - unsigned long flags; - - spin_lock_irqsave(&card->thread_mask_lock, flags); - card->thread_start_mask &= ~thread; - spin_unlock_irqrestore(&card->thread_mask_lock, flags); - wake_up(&card->wait_q); -} - -static void -qeth_clear_thread_running_bit(struct qeth_card *card, unsigned long thread) -{ - unsigned long flags; - - spin_lock_irqsave(&card->thread_mask_lock, flags); - card->thread_running_mask &= ~thread; - spin_unlock_irqrestore(&card->thread_mask_lock, flags); - wake_up(&card->wait_q); -} - -static int -__qeth_do_run_thread(struct qeth_card *card, unsigned long thread) -{ - unsigned long flags; - int rc = 0; - - spin_lock_irqsave(&card->thread_mask_lock, flags); - if (card->thread_start_mask & thread){ - if ((card->thread_allowed_mask & thread) && - !(card->thread_running_mask & thread)){ - rc = 1; - card->thread_start_mask &= ~thread; - card->thread_running_mask |= thread; - } else - rc = -EPERM; - } - spin_unlock_irqrestore(&card->thread_mask_lock, flags); - return rc; -} - -static int -qeth_do_run_thread(struct qeth_card *card, unsigned long thread) -{ - int rc = 0; - - wait_event(card->wait_q, - (rc = __qeth_do_run_thread(card, thread)) >= 0); - return rc; -} - -static int -qeth_recover(void *ptr) -{ - struct qeth_card *card; - int rc = 0; - - card = (struct qeth_card *) ptr; - daemonize("qeth_recover"); - QETH_DBF_TEXT(trace,2,"recover1"); - QETH_DBF_HEX(trace, 2, &card, sizeof(void *)); - if (!qeth_do_run_thread(card, QETH_RECOVER_THREAD)) - return 0; - QETH_DBF_TEXT(trace,2,"recover2"); - PRINT_WARN("Recovery of device %s started ...\n", - CARD_BUS_ID(card)); - card->use_hard_stop = 1; - __qeth_set_offline(card->gdev,1); - rc = __qeth_set_online(card->gdev,1); - /* don't run another scheduled recovery */ - qeth_clear_thread_start_bit(card, QETH_RECOVER_THREAD); - qeth_clear_thread_running_bit(card, QETH_RECOVER_THREAD); - if (!rc) - PRINT_INFO("Device %s successfully recovered!\n", - CARD_BUS_ID(card)); - else - PRINT_INFO("Device %s could not be recovered!\n", - CARD_BUS_ID(card)); - return 0; -} - -void -qeth_schedule_recovery(struct qeth_card *card) -{ - QETH_DBF_TEXT(trace,2,"startrec"); - if (qeth_set_thread_start_bit(card, QETH_RECOVER_THREAD) == 0) - schedule_work(&card->kernel_thread_starter); -} - -static int -qeth_do_start_thread(struct qeth_card *card, unsigned long thread) -{ - unsigned long flags; - int rc = 0; - - spin_lock_irqsave(&card->thread_mask_lock, flags); - QETH_DBF_TEXT_(trace, 4, " %02x%02x%02x", - (u8) card->thread_start_mask, - (u8) card->thread_allowed_mask, - (u8) card->thread_running_mask); - rc = (card->thread_start_mask & thread); - 
spin_unlock_irqrestore(&card->thread_mask_lock, flags); - return rc; -} - -static void -qeth_start_kernel_thread(struct work_struct *work) -{ - struct qeth_card *card = container_of(work, struct qeth_card, kernel_thread_starter); - QETH_DBF_TEXT(trace , 2, "strthrd"); - - if (card->read.state != CH_STATE_UP && - card->write.state != CH_STATE_UP) - return; - if (qeth_do_start_thread(card, QETH_RECOVER_THREAD)) - kernel_thread(qeth_recover, (void *) card, SIGCHLD); -} - - -static void -qeth_set_intial_options(struct qeth_card *card) -{ - card->options.route4.type = NO_ROUTER; -#ifdef CONFIG_QETH_IPV6 - card->options.route6.type = NO_ROUTER; -#endif /* QETH_IPV6 */ - card->options.checksum_type = QETH_CHECKSUM_DEFAULT; - card->options.broadcast_mode = QETH_TR_BROADCAST_ALLRINGS; - card->options.macaddr_mode = QETH_TR_MACADDR_NONCANONICAL; - card->options.fake_broadcast = 0; - card->options.add_hhlen = DEFAULT_ADD_HHLEN; - card->options.fake_ll = 0; - if (card->info.type == QETH_CARD_TYPE_OSN) - card->options.layer2 = 1; - else - card->options.layer2 = 0; - card->options.performance_stats = 0; - card->options.rx_sg_cb = QETH_RX_SG_CB; -} - -/** - * initialize channels ,card and all state machines - */ -static int -qeth_setup_card(struct qeth_card *card) -{ - - QETH_DBF_TEXT(setup, 2, "setupcrd"); - QETH_DBF_HEX(setup, 2, &card, sizeof(void *)); - - card->read.state = CH_STATE_DOWN; - card->write.state = CH_STATE_DOWN; - card->data.state = CH_STATE_DOWN; - card->state = CARD_STATE_DOWN; - card->lan_online = 0; - card->use_hard_stop = 0; - card->dev = NULL; -#ifdef CONFIG_QETH_VLAN - spin_lock_init(&card->vlanlock); - card->vlangrp = NULL; -#endif - spin_lock_init(&card->lock); - spin_lock_init(&card->ip_lock); - spin_lock_init(&card->thread_mask_lock); - card->thread_start_mask = 0; - card->thread_allowed_mask = 0; - card->thread_running_mask = 0; - INIT_WORK(&card->kernel_thread_starter, qeth_start_kernel_thread); - INIT_LIST_HEAD(&card->ip_list); - card->ip_tbd_list = kmalloc(sizeof(struct list_head), GFP_KERNEL); - if (!card->ip_tbd_list) { - QETH_DBF_TEXT(setup, 0, "iptbdnom"); - return -ENOMEM; - } - INIT_LIST_HEAD(card->ip_tbd_list); - INIT_LIST_HEAD(&card->cmd_waiter_list); - init_waitqueue_head(&card->wait_q); - /* intial options */ - qeth_set_intial_options(card); - /* IP address takeover */ - INIT_LIST_HEAD(&card->ipato.entries); - card->ipato.enabled = 0; - card->ipato.invert4 = 0; - card->ipato.invert6 = 0; - /* init QDIO stuff */ - qeth_init_qdio_info(card); - return 0; -} - -static int -is_1920_device (struct qeth_card *card) -{ - int single_queue = 0; - struct ccw_device *ccwdev; - struct channelPath_dsc { - u8 flags; - u8 lsn; - u8 desc; - u8 chpid; - u8 swla; - u8 zeroes; - u8 chla; - u8 chpp; - } *chp_dsc; - - QETH_DBF_TEXT(setup, 2, "chk_1920"); - - ccwdev = card->data.ccwdev; - chp_dsc = (struct channelPath_dsc *)ccw_device_get_chp_desc(ccwdev, 0); - if (chp_dsc != NULL) { - /* CHPP field bit 6 == 1 -> single queue */ - single_queue = ((chp_dsc->chpp & 0x02) == 0x02); - kfree(chp_dsc); - } - QETH_DBF_TEXT_(setup, 2, "rc:%x", single_queue); - return single_queue; -} - -static int -qeth_determine_card_type(struct qeth_card *card) -{ - int i = 0; - - QETH_DBF_TEXT(setup, 2, "detcdtyp"); - - card->qdio.do_prio_queueing = QETH_PRIOQ_DEFAULT; - card->qdio.default_out_queue = QETH_DEFAULT_QUEUE; - while (known_devices[i][4]) { - if ((CARD_RDEV(card)->id.dev_type == known_devices[i][2]) && - (CARD_RDEV(card)->id.dev_model == known_devices[i][3])) { - card->info.type = 
known_devices[i][4]; - card->qdio.no_out_queues = known_devices[i][8]; - card->info.is_multicast_different = known_devices[i][9]; - if (is_1920_device(card)) { - PRINT_INFO("Priority Queueing not able " - "due to hardware limitations!\n"); - card->qdio.no_out_queues = 1; - card->qdio.default_out_queue = 0; - } - return 0; - } - i++; - } - card->info.type = QETH_CARD_TYPE_UNKNOWN; - PRINT_ERR("unknown card type on device %s\n", CARD_BUS_ID(card)); - return -ENOENT; -} - -static int -qeth_probe_device(struct ccwgroup_device *gdev) -{ - struct qeth_card *card; - struct device *dev; - unsigned long flags; - int rc; - - QETH_DBF_TEXT(setup, 2, "probedev"); - - dev = &gdev->dev; - if (!get_device(dev)) - return -ENODEV; - - QETH_DBF_TEXT_(setup, 2, "%s", gdev->dev.bus_id); - - card = qeth_alloc_card(); - if (!card) { - put_device(dev); - QETH_DBF_TEXT_(setup, 2, "1err%d", -ENOMEM); - return -ENOMEM; - } - card->read.ccwdev = gdev->cdev[0]; - card->write.ccwdev = gdev->cdev[1]; - card->data.ccwdev = gdev->cdev[2]; - gdev->dev.driver_data = card; - card->gdev = gdev; - gdev->cdev[0]->handler = qeth_irq; - gdev->cdev[1]->handler = qeth_irq; - gdev->cdev[2]->handler = qeth_irq; - - if ((rc = qeth_determine_card_type(card))){ - PRINT_WARN("%s: not a valid card type\n", __func__); - QETH_DBF_TEXT_(setup, 2, "3err%d", rc); - put_device(dev); - qeth_free_card(card); - return rc; - } - if ((rc = qeth_setup_card(card))){ - QETH_DBF_TEXT_(setup, 2, "2err%d", rc); - put_device(dev); - qeth_free_card(card); - return rc; - } - rc = qeth_create_device_attributes(dev); - if (rc) { - put_device(dev); - qeth_free_card(card); - return rc; - } - /* insert into our internal list */ - write_lock_irqsave(&qeth_card_list.rwlock, flags); - list_add_tail(&card->list, &qeth_card_list.list); - write_unlock_irqrestore(&qeth_card_list.rwlock, flags); - return rc; -} - - -static int qeth_read_conf_data(struct qeth_card *card, void **buffer, - int *length) -{ - struct ciw *ciw; - char *rcd_buf; - int ret; - struct qeth_channel *channel = &card->data; - unsigned long flags; - - /* - * scan for RCD command in extended SenseID data - */ - ciw = ccw_device_get_ciw(channel->ccwdev, CIW_TYPE_RCD); - if (!ciw || ciw->cmd == 0) - return -EOPNOTSUPP; - rcd_buf = kzalloc(ciw->count, GFP_KERNEL | GFP_DMA); - if (!rcd_buf) - return -ENOMEM; - - channel->ccw.cmd_code = ciw->cmd; - channel->ccw.cda = (__u32) __pa (rcd_buf); - channel->ccw.count = ciw->count; - channel->ccw.flags = CCW_FLAG_SLI; - channel->state = CH_STATE_RCD; - spin_lock_irqsave(get_ccwdev_lock(channel->ccwdev), flags); - ret = ccw_device_start_timeout(channel->ccwdev, &channel->ccw, - QETH_RCD_PARM, LPM_ANYPATH, 0, - QETH_RCD_TIMEOUT); - spin_unlock_irqrestore(get_ccwdev_lock(channel->ccwdev), flags); - if (!ret) - wait_event(card->wait_q, - (channel->state == CH_STATE_RCD_DONE || - channel->state == CH_STATE_DOWN)); - if (channel->state == CH_STATE_DOWN) - ret = -EIO; - else - channel->state = CH_STATE_DOWN; - if (ret) { - kfree(rcd_buf); - *buffer = NULL; - *length = 0; - } else { - *length = ciw->count; - *buffer = rcd_buf; - } - return ret; -} - -static int -qeth_get_unitaddr(struct qeth_card *card) -{ - int length; - char *prcd; - int rc; - - QETH_DBF_TEXT(setup, 2, "getunit"); - rc = qeth_read_conf_data(card, (void **) &prcd, &length); - if (rc) { - PRINT_ERR("qeth_read_conf_data for device %s returned %i\n", - CARD_DDEV_ID(card), rc); - return rc; - } - card->info.chpid = prcd[30]; - card->info.unit_addr2 = prcd[31]; - card->info.cula = prcd[63]; - 
card->info.guestlan = ((prcd[0x10] == _ascebc['V']) && - (prcd[0x11] == _ascebc['M'])); - kfree(prcd); - return 0; -} - -static void -qeth_init_tokens(struct qeth_card *card) -{ - card->token.issuer_rm_w = 0x00010103UL; - card->token.cm_filter_w = 0x00010108UL; - card->token.cm_connection_w = 0x0001010aUL; - card->token.ulp_filter_w = 0x0001010bUL; - card->token.ulp_connection_w = 0x0001010dUL; -} - -static inline __u16 -raw_devno_from_bus_id(char *id) -{ - id += (strlen(id) - 4); - return (__u16) simple_strtoul(id, &id, 16); -} -/** - * setup channel - */ -static void -qeth_setup_ccw(struct qeth_channel *channel,unsigned char *iob, __u32 len) -{ - struct qeth_card *card; - - QETH_DBF_TEXT(trace, 4, "setupccw"); - card = CARD_FROM_CDEV(channel->ccwdev); - if (channel == &card->read) - memcpy(&channel->ccw, READ_CCW, sizeof(struct ccw1)); - else - memcpy(&channel->ccw, WRITE_CCW, sizeof(struct ccw1)); - channel->ccw.count = len; - channel->ccw.cda = (__u32) __pa(iob); -} - -/** - * get free buffer for ccws (IDX activation, lancmds,ipassists...) - */ -static struct qeth_cmd_buffer * -__qeth_get_buffer(struct qeth_channel *channel) -{ - __u8 index; - - QETH_DBF_TEXT(trace, 6, "getbuff"); - index = channel->io_buf_no; - do { - if (channel->iob[index].state == BUF_STATE_FREE) { - channel->iob[index].state = BUF_STATE_LOCKED; - channel->io_buf_no = (channel->io_buf_no + 1) % - QETH_CMD_BUFFER_NO; - memset(channel->iob[index].data, 0, QETH_BUFSIZE); - return channel->iob + index; - } - index = (index + 1) % QETH_CMD_BUFFER_NO; - } while(index != channel->io_buf_no); - - return NULL; -} - -/** - * release command buffer - */ -static void -qeth_release_buffer(struct qeth_channel *channel, struct qeth_cmd_buffer *iob) -{ - unsigned long flags; - - QETH_DBF_TEXT(trace, 6, "relbuff"); - spin_lock_irqsave(&channel->iob_lock, flags); - memset(iob->data, 0, QETH_BUFSIZE); - iob->state = BUF_STATE_FREE; - iob->callback = qeth_send_control_data_cb; - iob->rc = 0; - spin_unlock_irqrestore(&channel->iob_lock, flags); -} - -static struct qeth_cmd_buffer * -qeth_get_buffer(struct qeth_channel *channel) -{ - struct qeth_cmd_buffer *buffer = NULL; - unsigned long flags; - - spin_lock_irqsave(&channel->iob_lock, flags); - buffer = __qeth_get_buffer(channel); - spin_unlock_irqrestore(&channel->iob_lock, flags); - return buffer; -} - -static struct qeth_cmd_buffer * -qeth_wait_for_buffer(struct qeth_channel *channel) -{ - struct qeth_cmd_buffer *buffer; - wait_event(channel->wait_q, - ((buffer = qeth_get_buffer(channel)) != NULL)); - return buffer; -} - -static void -qeth_clear_cmd_buffers(struct qeth_channel *channel) -{ - int cnt; - - for (cnt=0; cnt < QETH_CMD_BUFFER_NO; cnt++) - qeth_release_buffer(channel,&channel->iob[cnt]); - channel->buf_no = 0; - channel->io_buf_no = 0; -} - -/** - * start IDX for read and write channel - */ -static int -qeth_idx_activate_get_answer(struct qeth_channel *channel, - void (*idx_reply_cb)(struct qeth_channel *, - struct qeth_cmd_buffer *)) -{ - struct qeth_cmd_buffer *iob; - unsigned long flags; - int rc; - struct qeth_card *card; - - QETH_DBF_TEXT(setup, 2, "idxanswr"); - card = CARD_FROM_CDEV(channel->ccwdev); - iob = qeth_get_buffer(channel); - iob->callback = idx_reply_cb; - memcpy(&channel->ccw, READ_CCW, sizeof(struct ccw1)); - channel->ccw.count = QETH_BUFSIZE; - channel->ccw.cda = (__u32) __pa(iob->data); - - wait_event(card->wait_q, - atomic_cmpxchg(&channel->irq_pending, 0, 1) == 0); - QETH_DBF_TEXT(setup, 6, "noirqpnd"); - 
spin_lock_irqsave(get_ccwdev_lock(channel->ccwdev), flags); - rc = ccw_device_start(channel->ccwdev, - &channel->ccw,(addr_t) iob, 0, 0); - spin_unlock_irqrestore(get_ccwdev_lock(channel->ccwdev), flags); - - if (rc) { - PRINT_ERR("qeth: Error2 in activating channel rc=%d\n",rc); - QETH_DBF_TEXT_(setup, 2, "2err%d", rc); - atomic_set(&channel->irq_pending, 0); - wake_up(&card->wait_q); - return rc; - } - rc = wait_event_interruptible_timeout(card->wait_q, - channel->state == CH_STATE_UP, QETH_TIMEOUT); - if (rc == -ERESTARTSYS) - return rc; - if (channel->state != CH_STATE_UP){ - rc = -ETIME; - QETH_DBF_TEXT_(setup, 2, "3err%d", rc); - qeth_clear_cmd_buffers(channel); - } else - rc = 0; - return rc; -} - -static int -qeth_idx_activate_channel(struct qeth_channel *channel, - void (*idx_reply_cb)(struct qeth_channel *, - struct qeth_cmd_buffer *)) -{ - struct qeth_card *card; - struct qeth_cmd_buffer *iob; - unsigned long flags; - __u16 temp; - int rc; - - card = CARD_FROM_CDEV(channel->ccwdev); - - QETH_DBF_TEXT(setup, 2, "idxactch"); - - iob = qeth_get_buffer(channel); - iob->callback = idx_reply_cb; - memcpy(&channel->ccw, WRITE_CCW, sizeof(struct ccw1)); - channel->ccw.count = IDX_ACTIVATE_SIZE; - channel->ccw.cda = (__u32) __pa(iob->data); - if (channel == &card->write) { - memcpy(iob->data, IDX_ACTIVATE_WRITE, IDX_ACTIVATE_SIZE); - memcpy(QETH_TRANSPORT_HEADER_SEQ_NO(iob->data), - &card->seqno.trans_hdr, QETH_SEQ_NO_LENGTH); - card->seqno.trans_hdr++; - } else { - memcpy(iob->data, IDX_ACTIVATE_READ, IDX_ACTIVATE_SIZE); - memcpy(QETH_TRANSPORT_HEADER_SEQ_NO(iob->data), - &card->seqno.trans_hdr, QETH_SEQ_NO_LENGTH); - } - memcpy(QETH_IDX_ACT_ISSUER_RM_TOKEN(iob->data), - &card->token.issuer_rm_w,QETH_MPC_TOKEN_LENGTH); - memcpy(QETH_IDX_ACT_FUNC_LEVEL(iob->data), - &card->info.func_level,sizeof(__u16)); - temp = raw_devno_from_bus_id(CARD_DDEV_ID(card)); - memcpy(QETH_IDX_ACT_QDIO_DEV_CUA(iob->data), &temp, 2); - temp = (card->info.cula << 8) + card->info.unit_addr2; - memcpy(QETH_IDX_ACT_QDIO_DEV_REALADDR(iob->data), &temp, 2); - - wait_event(card->wait_q, - atomic_cmpxchg(&channel->irq_pending, 0, 1) == 0); - QETH_DBF_TEXT(setup, 6, "noirqpnd"); - spin_lock_irqsave(get_ccwdev_lock(channel->ccwdev), flags); - rc = ccw_device_start(channel->ccwdev, - &channel->ccw,(addr_t) iob, 0, 0); - spin_unlock_irqrestore(get_ccwdev_lock(channel->ccwdev), flags); - - if (rc) { - PRINT_ERR("qeth: Error1 in activating channel. 
rc=%d\n",rc); - QETH_DBF_TEXT_(setup, 2, "1err%d", rc); - atomic_set(&channel->irq_pending, 0); - wake_up(&card->wait_q); - return rc; - } - rc = wait_event_interruptible_timeout(card->wait_q, - channel->state == CH_STATE_ACTIVATING, QETH_TIMEOUT); - if (rc == -ERESTARTSYS) - return rc; - if (channel->state != CH_STATE_ACTIVATING) { - PRINT_WARN("qeth: IDX activate timed out!\n"); - QETH_DBF_TEXT_(setup, 2, "2err%d", -ETIME); - qeth_clear_cmd_buffers(channel); - return -ETIME; - } - return qeth_idx_activate_get_answer(channel,idx_reply_cb); -} - -static int -qeth_peer_func_level(int level) -{ - if ((level & 0xff) == 8) - return (level & 0xff) + 0x400; - if (((level >> 8) & 3) == 1) - return (level & 0xff) + 0x200; - return level; -} - -static void -qeth_idx_write_cb(struct qeth_channel *channel, struct qeth_cmd_buffer *iob) -{ - struct qeth_card *card; - __u16 temp; - - QETH_DBF_TEXT(setup ,2, "idxwrcb"); - - if (channel->state == CH_STATE_DOWN) { - channel->state = CH_STATE_ACTIVATING; - goto out; - } - card = CARD_FROM_CDEV(channel->ccwdev); - - if (!(QETH_IS_IDX_ACT_POS_REPLY(iob->data))) { - if (QETH_IDX_ACT_CAUSE_CODE(iob->data) == 0x19) - PRINT_ERR("IDX_ACTIVATE on write channel device %s: " - "adapter exclusively used by another host\n", - CARD_WDEV_ID(card)); - else - PRINT_ERR("IDX_ACTIVATE on write channel device %s: " - "negative reply\n", CARD_WDEV_ID(card)); - goto out; - } - memcpy(&temp, QETH_IDX_ACT_FUNC_LEVEL(iob->data), 2); - if ((temp & ~0x0100) != qeth_peer_func_level(card->info.func_level)) { - PRINT_WARN("IDX_ACTIVATE on write channel device %s: " - "function level mismatch " - "(sent: 0x%x, received: 0x%x)\n", - CARD_WDEV_ID(card), card->info.func_level, temp); - goto out; - } - channel->state = CH_STATE_UP; -out: - qeth_release_buffer(channel, iob); -} - -static int -qeth_check_idx_response(unsigned char *buffer) -{ - if (!buffer) - return 0; - - QETH_DBF_HEX(control, 2, buffer, QETH_DBF_CONTROL_LEN); - if ((buffer[2] & 0xc0) == 0xc0) { - PRINT_WARN("received an IDX TERMINATE " - "with cause code 0x%02x%s\n", - buffer[4], - ((buffer[4] == 0x22) ? 
- " -- try another portname" : "")); - QETH_DBF_TEXT(trace, 2, "ckidxres"); - QETH_DBF_TEXT(trace, 2, " idxterm"); - QETH_DBF_TEXT_(trace, 2, " rc%d", -EIO); - return -EIO; - } - return 0; -} - -static void -qeth_idx_read_cb(struct qeth_channel *channel, struct qeth_cmd_buffer *iob) -{ - struct qeth_card *card; - __u16 temp; - - QETH_DBF_TEXT(setup , 2, "idxrdcb"); - if (channel->state == CH_STATE_DOWN) { - channel->state = CH_STATE_ACTIVATING; - goto out; - } - - card = CARD_FROM_CDEV(channel->ccwdev); - if (qeth_check_idx_response(iob->data)) { - goto out; - } - if (!(QETH_IS_IDX_ACT_POS_REPLY(iob->data))) { - if (QETH_IDX_ACT_CAUSE_CODE(iob->data) == 0x19) - PRINT_ERR("IDX_ACTIVATE on read channel device %s: " - "adapter exclusively used by another host\n", - CARD_RDEV_ID(card)); - else - PRINT_ERR("IDX_ACTIVATE on read channel device %s: " - "negative reply\n", CARD_RDEV_ID(card)); - goto out; - } - -/** - * temporary fix for microcode bug - * to revert it,replace OR by AND - */ - if ( (!QETH_IDX_NO_PORTNAME_REQUIRED(iob->data)) || - (card->info.type == QETH_CARD_TYPE_OSAE) ) - card->info.portname_required = 1; - - memcpy(&temp, QETH_IDX_ACT_FUNC_LEVEL(iob->data), 2); - if (temp != qeth_peer_func_level(card->info.func_level)) { - PRINT_WARN("IDX_ACTIVATE on read channel device %s: function " - "level mismatch (sent: 0x%x, received: 0x%x)\n", - CARD_RDEV_ID(card), card->info.func_level, temp); - goto out; - } - memcpy(&card->token.issuer_rm_r, - QETH_IDX_ACT_ISSUER_RM_TOKEN(iob->data), - QETH_MPC_TOKEN_LENGTH); - memcpy(&card->info.mcl_level[0], - QETH_IDX_REPLY_LEVEL(iob->data), QETH_MCL_LENGTH); - channel->state = CH_STATE_UP; -out: - qeth_release_buffer(channel,iob); -} - -static int -qeth_issue_next_read(struct qeth_card *card) -{ - int rc; - struct qeth_cmd_buffer *iob; - - QETH_DBF_TEXT(trace,5,"issnxrd"); - if (card->read.state != CH_STATE_UP) - return -EIO; - iob = qeth_get_buffer(&card->read); - if (!iob) { - PRINT_WARN("issue_next_read failed: no iob available!\n"); - return -ENOMEM; - } - qeth_setup_ccw(&card->read, iob->data, QETH_BUFSIZE); - QETH_DBF_TEXT(trace, 6, "noirqpnd"); - rc = ccw_device_start(card->read.ccwdev, &card->read.ccw, - (addr_t) iob, 0, 0); - if (rc) { - PRINT_ERR("Error in starting next read ccw! 
rc=%i\n", rc); - atomic_set(&card->read.irq_pending, 0); - qeth_schedule_recovery(card); - wake_up(&card->wait_q); - } - return rc; -} - -static struct qeth_reply * -qeth_alloc_reply(struct qeth_card *card) -{ - struct qeth_reply *reply; - - reply = kzalloc(sizeof(struct qeth_reply), GFP_ATOMIC); - if (reply){ - atomic_set(&reply->refcnt, 1); - atomic_set(&reply->received, 0); - reply->card = card; - }; - return reply; -} - -static void -qeth_get_reply(struct qeth_reply *reply) -{ - WARN_ON(atomic_read(&reply->refcnt) <= 0); - atomic_inc(&reply->refcnt); -} - -static void -qeth_put_reply(struct qeth_reply *reply) -{ - WARN_ON(atomic_read(&reply->refcnt) <= 0); - if (atomic_dec_and_test(&reply->refcnt)) - kfree(reply); -} - -static void -qeth_issue_ipa_msg(struct qeth_ipa_cmd *cmd, struct qeth_card *card) -{ - int rc; - int com; - char * ipa_name; - - com = cmd->hdr.command; - rc = cmd->hdr.return_code; - ipa_name = qeth_get_ipa_cmd_name(com); - - PRINT_ERR("%s(x%X) for %s returned x%X \"%s\"\n", ipa_name, com, - QETH_CARD_IFNAME(card), rc, qeth_get_ipa_msg(rc)); -} - -static struct qeth_ipa_cmd * -qeth_check_ipa_data(struct qeth_card *card, struct qeth_cmd_buffer *iob) -{ - struct qeth_ipa_cmd *cmd = NULL; - - QETH_DBF_TEXT(trace,5,"chkipad"); - if (IS_IPA(iob->data)){ - cmd = (struct qeth_ipa_cmd *) PDU_ENCAPSULATION(iob->data); - if (IS_IPA_REPLY(cmd)) { - if (cmd->hdr.return_code) - qeth_issue_ipa_msg(cmd, card); - return cmd; - } - else { - switch (cmd->hdr.command) { - case IPA_CMD_STOPLAN: - PRINT_WARN("Link failure on %s (CHPID 0x%X) - " - "there is a network problem or " - "someone pulled the cable or " - "disabled the port.\n", - QETH_CARD_IFNAME(card), - card->info.chpid); - card->lan_online = 0; - if (card->dev && netif_carrier_ok(card->dev)) - netif_carrier_off(card->dev); - return NULL; - case IPA_CMD_STARTLAN: - PRINT_INFO("Link reestablished on %s " - "(CHPID 0x%X). 
Scheduling " - "IP address reset.\n", - QETH_CARD_IFNAME(card), - card->info.chpid); - netif_carrier_on(card->dev); - qeth_schedule_recovery(card); - return NULL; - case IPA_CMD_MODCCID: - return cmd; - case IPA_CMD_REGISTER_LOCAL_ADDR: - QETH_DBF_TEXT(trace,3, "irla"); - break; - case IPA_CMD_UNREGISTER_LOCAL_ADDR: - QETH_DBF_TEXT(trace,3, "urla"); - break; - default: - PRINT_WARN("Received data is IPA " - "but not a reply!\n"); - break; - } - } - } - return cmd; -} - -/** - * wake all waiting ipa commands - */ -static void -qeth_clear_ipacmd_list(struct qeth_card *card) -{ - struct qeth_reply *reply, *r; - unsigned long flags; - - QETH_DBF_TEXT(trace, 4, "clipalst"); - - spin_lock_irqsave(&card->lock, flags); - list_for_each_entry_safe(reply, r, &card->cmd_waiter_list, list) { - qeth_get_reply(reply); - reply->rc = -EIO; - atomic_inc(&reply->received); - list_del_init(&reply->list); - wake_up(&reply->wait_q); - qeth_put_reply(reply); - } - spin_unlock_irqrestore(&card->lock, flags); -} - -static void -qeth_send_control_data_cb(struct qeth_channel *channel, - struct qeth_cmd_buffer *iob) -{ - struct qeth_card *card; - struct qeth_reply *reply, *r; - struct qeth_ipa_cmd *cmd; - unsigned long flags; - int keep_reply; - - QETH_DBF_TEXT(trace,4,"sndctlcb"); - - card = CARD_FROM_CDEV(channel->ccwdev); - if (qeth_check_idx_response(iob->data)) { - qeth_clear_ipacmd_list(card); - qeth_schedule_recovery(card); - goto out; - } - - cmd = qeth_check_ipa_data(card, iob); - if ((cmd == NULL) && (card->state != CARD_STATE_DOWN)) - goto out; - /*in case of OSN : check if cmd is set */ - if (card->info.type == QETH_CARD_TYPE_OSN && - cmd && - cmd->hdr.command != IPA_CMD_STARTLAN && - card->osn_info.assist_cb != NULL) { - card->osn_info.assist_cb(card->dev, cmd); - goto out; - } - - spin_lock_irqsave(&card->lock, flags); - list_for_each_entry_safe(reply, r, &card->cmd_waiter_list, list) { - if ((reply->seqno == QETH_IDX_COMMAND_SEQNO) || - ((cmd) && (reply->seqno == cmd->hdr.seqno))) { - qeth_get_reply(reply); - list_del_init(&reply->list); - spin_unlock_irqrestore(&card->lock, flags); - keep_reply = 0; - if (reply->callback != NULL) { - if (cmd) { - reply->offset = (__u16)((char*)cmd - - (char *)iob->data); - keep_reply = reply->callback(card, - reply, - (unsigned long)cmd); - } else - keep_reply = reply->callback(card, - reply, - (unsigned long)iob); - } - if (cmd) - reply->rc = (u16) cmd->hdr.return_code; - else if (iob->rc) - reply->rc = iob->rc; - if (keep_reply) { - spin_lock_irqsave(&card->lock, flags); - list_add_tail(&reply->list, - &card->cmd_waiter_list); - spin_unlock_irqrestore(&card->lock, flags); - } else { - atomic_inc(&reply->received); - wake_up(&reply->wait_q); - } - qeth_put_reply(reply); - goto out; - } - } - spin_unlock_irqrestore(&card->lock, flags); -out: - memcpy(&card->seqno.pdu_hdr_ack, - QETH_PDU_HEADER_SEQ_NO(iob->data), - QETH_SEQ_NO_LENGTH); - qeth_release_buffer(channel,iob); -} - -static void -qeth_prepare_control_data(struct qeth_card *card, int len, - struct qeth_cmd_buffer *iob) -{ - qeth_setup_ccw(&card->write,iob->data,len); - iob->callback = qeth_release_buffer; - - memcpy(QETH_TRANSPORT_HEADER_SEQ_NO(iob->data), - &card->seqno.trans_hdr, QETH_SEQ_NO_LENGTH); - card->seqno.trans_hdr++; - memcpy(QETH_PDU_HEADER_SEQ_NO(iob->data), - &card->seqno.pdu_hdr, QETH_SEQ_NO_LENGTH); - card->seqno.pdu_hdr++; - memcpy(QETH_PDU_HEADER_ACK_SEQ_NO(iob->data), - &card->seqno.pdu_hdr_ack, QETH_SEQ_NO_LENGTH); - QETH_DBF_HEX(control, 2, iob->data, QETH_DBF_CONTROL_LEN); -} - -static 
int -qeth_send_control_data(struct qeth_card *card, int len, - struct qeth_cmd_buffer *iob, - int (*reply_cb) - (struct qeth_card *, struct qeth_reply*, unsigned long), - void *reply_param) - -{ - int rc; - unsigned long flags; - struct qeth_reply *reply = NULL; - unsigned long timeout; - - QETH_DBF_TEXT(trace, 2, "sendctl"); - - reply = qeth_alloc_reply(card); - if (!reply) { - PRINT_WARN("Could no alloc qeth_reply!\n"); - return -ENOMEM; - } - reply->callback = reply_cb; - reply->param = reply_param; - if (card->state == CARD_STATE_DOWN) - reply->seqno = QETH_IDX_COMMAND_SEQNO; - else - reply->seqno = card->seqno.ipa++; - init_waitqueue_head(&reply->wait_q); - spin_lock_irqsave(&card->lock, flags); - list_add_tail(&reply->list, &card->cmd_waiter_list); - spin_unlock_irqrestore(&card->lock, flags); - QETH_DBF_HEX(control, 2, iob->data, QETH_DBF_CONTROL_LEN); - - while (atomic_cmpxchg(&card->write.irq_pending, 0, 1)) ; - qeth_prepare_control_data(card, len, iob); - - if (IS_IPA(iob->data)) - timeout = jiffies + QETH_IPA_TIMEOUT; - else - timeout = jiffies + QETH_TIMEOUT; - - QETH_DBF_TEXT(trace, 6, "noirqpnd"); - spin_lock_irqsave(get_ccwdev_lock(card->write.ccwdev), flags); - rc = ccw_device_start(card->write.ccwdev, &card->write.ccw, - (addr_t) iob, 0, 0); - spin_unlock_irqrestore(get_ccwdev_lock(card->write.ccwdev), flags); - if (rc){ - PRINT_WARN("qeth_send_control_data: " - "ccw_device_start rc = %i\n", rc); - QETH_DBF_TEXT_(trace, 2, " err%d", rc); - spin_lock_irqsave(&card->lock, flags); - list_del_init(&reply->list); - qeth_put_reply(reply); - spin_unlock_irqrestore(&card->lock, flags); - qeth_release_buffer(iob->channel, iob); - atomic_set(&card->write.irq_pending, 0); - wake_up(&card->wait_q); - return rc; - } - while (!atomic_read(&reply->received)) { - if (time_after(jiffies, timeout)) { - spin_lock_irqsave(&reply->card->lock, flags); - list_del_init(&reply->list); - spin_unlock_irqrestore(&reply->card->lock, flags); - reply->rc = -ETIME; - atomic_inc(&reply->received); - wake_up(&reply->wait_q); - } - cpu_relax(); - }; - rc = reply->rc; - qeth_put_reply(reply); - return rc; -} - -static int -qeth_osn_send_control_data(struct qeth_card *card, int len, - struct qeth_cmd_buffer *iob) -{ - unsigned long flags; - int rc = 0; - - QETH_DBF_TEXT(trace, 5, "osndctrd"); - - wait_event(card->wait_q, - atomic_cmpxchg(&card->write.irq_pending, 0, 1) == 0); - qeth_prepare_control_data(card, len, iob); - QETH_DBF_TEXT(trace, 6, "osnoirqp"); - spin_lock_irqsave(get_ccwdev_lock(card->write.ccwdev), flags); - rc = ccw_device_start(card->write.ccwdev, &card->write.ccw, - (addr_t) iob, 0, 0); - spin_unlock_irqrestore(get_ccwdev_lock(card->write.ccwdev), flags); - if (rc){ - PRINT_WARN("qeth_osn_send_control_data: " - "ccw_device_start rc = %i\n", rc); - QETH_DBF_TEXT_(trace, 2, " err%d", rc); - qeth_release_buffer(iob->channel, iob); - atomic_set(&card->write.irq_pending, 0); - wake_up(&card->wait_q); - } - return rc; -} - -static inline void -qeth_prepare_ipa_cmd(struct qeth_card *card, struct qeth_cmd_buffer *iob, - char prot_type) -{ - memcpy(iob->data, IPA_PDU_HEADER, IPA_PDU_HEADER_SIZE); - memcpy(QETH_IPA_CMD_PROT_TYPE(iob->data),&prot_type,1); - memcpy(QETH_IPA_CMD_DEST_ADDR(iob->data), - &card->token.ulp_connection_r, QETH_MPC_TOKEN_LENGTH); -} - -static int -qeth_osn_send_ipa_cmd(struct qeth_card *card, struct qeth_cmd_buffer *iob, - int data_len) -{ - u16 s1, s2; - - QETH_DBF_TEXT(trace,4,"osndipa"); - - qeth_prepare_ipa_cmd(card, iob, QETH_PROT_OSN2); - s1 = (u16)(IPA_PDU_HEADER_SIZE + 
data_len); - s2 = (u16)data_len; - memcpy(QETH_IPA_PDU_LEN_TOTAL(iob->data), &s1, 2); - memcpy(QETH_IPA_PDU_LEN_PDU1(iob->data), &s2, 2); - memcpy(QETH_IPA_PDU_LEN_PDU2(iob->data), &s2, 2); - memcpy(QETH_IPA_PDU_LEN_PDU3(iob->data), &s2, 2); - return qeth_osn_send_control_data(card, s1, iob); -} - -static int -qeth_send_ipa_cmd(struct qeth_card *card, struct qeth_cmd_buffer *iob, - int (*reply_cb) - (struct qeth_card *,struct qeth_reply*, unsigned long), - void *reply_param) -{ - int rc; - char prot_type; - - QETH_DBF_TEXT(trace,4,"sendipa"); - - if (card->options.layer2) - if (card->info.type == QETH_CARD_TYPE_OSN) - prot_type = QETH_PROT_OSN2; - else - prot_type = QETH_PROT_LAYER2; - else - prot_type = QETH_PROT_TCPIP; - qeth_prepare_ipa_cmd(card,iob,prot_type); - rc = qeth_send_control_data(card, IPA_CMD_LENGTH, iob, - reply_cb, reply_param); - return rc; -} - - -static int -qeth_cm_enable_cb(struct qeth_card *card, struct qeth_reply *reply, - unsigned long data) -{ - struct qeth_cmd_buffer *iob; - - QETH_DBF_TEXT(setup, 2, "cmenblcb"); - - iob = (struct qeth_cmd_buffer *) data; - memcpy(&card->token.cm_filter_r, - QETH_CM_ENABLE_RESP_FILTER_TOKEN(iob->data), - QETH_MPC_TOKEN_LENGTH); - QETH_DBF_TEXT_(setup, 2, " rc%d", iob->rc); - return 0; -} - -static int -qeth_cm_enable(struct qeth_card *card) -{ - int rc; - struct qeth_cmd_buffer *iob; - - QETH_DBF_TEXT(setup,2,"cmenable"); - - iob = qeth_wait_for_buffer(&card->write); - memcpy(iob->data, CM_ENABLE, CM_ENABLE_SIZE); - memcpy(QETH_CM_ENABLE_ISSUER_RM_TOKEN(iob->data), - &card->token.issuer_rm_r, QETH_MPC_TOKEN_LENGTH); - memcpy(QETH_CM_ENABLE_FILTER_TOKEN(iob->data), - &card->token.cm_filter_w, QETH_MPC_TOKEN_LENGTH); - - rc = qeth_send_control_data(card, CM_ENABLE_SIZE, iob, - qeth_cm_enable_cb, NULL); - return rc; -} - -static int -qeth_cm_setup_cb(struct qeth_card *card, struct qeth_reply *reply, - unsigned long data) -{ - - struct qeth_cmd_buffer *iob; - - QETH_DBF_TEXT(setup, 2, "cmsetpcb"); - - iob = (struct qeth_cmd_buffer *) data; - memcpy(&card->token.cm_connection_r, - QETH_CM_SETUP_RESP_DEST_ADDR(iob->data), - QETH_MPC_TOKEN_LENGTH); - QETH_DBF_TEXT_(setup, 2, " rc%d", iob->rc); - return 0; -} - -static int -qeth_cm_setup(struct qeth_card *card) -{ - int rc; - struct qeth_cmd_buffer *iob; - - QETH_DBF_TEXT(setup,2,"cmsetup"); - - iob = qeth_wait_for_buffer(&card->write); - memcpy(iob->data, CM_SETUP, CM_SETUP_SIZE); - memcpy(QETH_CM_SETUP_DEST_ADDR(iob->data), - &card->token.issuer_rm_r, QETH_MPC_TOKEN_LENGTH); - memcpy(QETH_CM_SETUP_CONNECTION_TOKEN(iob->data), - &card->token.cm_connection_w, QETH_MPC_TOKEN_LENGTH); - memcpy(QETH_CM_SETUP_FILTER_TOKEN(iob->data), - &card->token.cm_filter_r, QETH_MPC_TOKEN_LENGTH); - rc = qeth_send_control_data(card, CM_SETUP_SIZE, iob, - qeth_cm_setup_cb, NULL); - return rc; - -} - -static int -qeth_ulp_enable_cb(struct qeth_card *card, struct qeth_reply *reply, - unsigned long data) -{ - - __u16 mtu, framesize; - __u16 len; - __u8 link_type; - struct qeth_cmd_buffer *iob; - - QETH_DBF_TEXT(setup, 2, "ulpenacb"); - - iob = (struct qeth_cmd_buffer *) data; - memcpy(&card->token.ulp_filter_r, - QETH_ULP_ENABLE_RESP_FILTER_TOKEN(iob->data), - QETH_MPC_TOKEN_LENGTH); - if (qeth_get_mtu_out_of_mpc(card->info.type)) { - memcpy(&framesize, QETH_ULP_ENABLE_RESP_MAX_MTU(iob->data), 2); - mtu = qeth_get_mtu_outof_framesize(framesize); - if (!mtu) { - iob->rc = -EINVAL; - QETH_DBF_TEXT_(setup, 2, " rc%d", iob->rc); - return 0; - } - card->info.max_mtu = mtu; - card->info.initial_mtu = mtu; - 
card->qdio.in_buf_size = mtu + 2 * PAGE_SIZE; - } else { - card->info.initial_mtu = qeth_get_initial_mtu_for_card(card); - card->info.max_mtu = qeth_get_max_mtu_for_card(card->info.type); - card->qdio.in_buf_size = QETH_IN_BUF_SIZE_DEFAULT; - } - - memcpy(&len, QETH_ULP_ENABLE_RESP_DIFINFO_LEN(iob->data), 2); - if (len >= QETH_MPC_DIFINFO_LEN_INDICATES_LINK_TYPE) { - memcpy(&link_type, - QETH_ULP_ENABLE_RESP_LINK_TYPE(iob->data), 1); - card->info.link_type = link_type; - } else - card->info.link_type = 0; - QETH_DBF_TEXT_(setup, 2, " rc%d", iob->rc); - return 0; -} - -static int -qeth_ulp_enable(struct qeth_card *card) -{ - int rc; - char prot_type; - struct qeth_cmd_buffer *iob; - - /*FIXME: trace view callbacks*/ - QETH_DBF_TEXT(setup,2,"ulpenabl"); - - iob = qeth_wait_for_buffer(&card->write); - memcpy(iob->data, ULP_ENABLE, ULP_ENABLE_SIZE); - - *(QETH_ULP_ENABLE_LINKNUM(iob->data)) = - (__u8) card->info.portno; - if (card->options.layer2) - if (card->info.type == QETH_CARD_TYPE_OSN) - prot_type = QETH_PROT_OSN2; - else - prot_type = QETH_PROT_LAYER2; - else - prot_type = QETH_PROT_TCPIP; - - memcpy(QETH_ULP_ENABLE_PROT_TYPE(iob->data),&prot_type,1); - memcpy(QETH_ULP_ENABLE_DEST_ADDR(iob->data), - &card->token.cm_connection_r, QETH_MPC_TOKEN_LENGTH); - memcpy(QETH_ULP_ENABLE_FILTER_TOKEN(iob->data), - &card->token.ulp_filter_w, QETH_MPC_TOKEN_LENGTH); - memcpy(QETH_ULP_ENABLE_PORTNAME_AND_LL(iob->data), - card->info.portname, 9); - rc = qeth_send_control_data(card, ULP_ENABLE_SIZE, iob, - qeth_ulp_enable_cb, NULL); - return rc; - -} - -static int -qeth_ulp_setup_cb(struct qeth_card *card, struct qeth_reply *reply, - unsigned long data) -{ - struct qeth_cmd_buffer *iob; - - QETH_DBF_TEXT(setup, 2, "ulpstpcb"); - - iob = (struct qeth_cmd_buffer *) data; - memcpy(&card->token.ulp_connection_r, - QETH_ULP_SETUP_RESP_CONNECTION_TOKEN(iob->data), - QETH_MPC_TOKEN_LENGTH); - QETH_DBF_TEXT_(setup, 2, " rc%d", iob->rc); - return 0; -} - -static int -qeth_ulp_setup(struct qeth_card *card) -{ - int rc; - __u16 temp; - struct qeth_cmd_buffer *iob; - struct ccw_dev_id dev_id; - - QETH_DBF_TEXT(setup,2,"ulpsetup"); - - iob = qeth_wait_for_buffer(&card->write); - memcpy(iob->data, ULP_SETUP, ULP_SETUP_SIZE); - - memcpy(QETH_ULP_SETUP_DEST_ADDR(iob->data), - &card->token.cm_connection_r, QETH_MPC_TOKEN_LENGTH); - memcpy(QETH_ULP_SETUP_CONNECTION_TOKEN(iob->data), - &card->token.ulp_connection_w, QETH_MPC_TOKEN_LENGTH); - memcpy(QETH_ULP_SETUP_FILTER_TOKEN(iob->data), - &card->token.ulp_filter_r, QETH_MPC_TOKEN_LENGTH); - - ccw_device_get_id(CARD_DDEV(card), &dev_id); - memcpy(QETH_ULP_SETUP_CUA(iob->data), &dev_id.devno, 2); - temp = (card->info.cula << 8) + card->info.unit_addr2; - memcpy(QETH_ULP_SETUP_REAL_DEVADDR(iob->data), &temp, 2); - rc = qeth_send_control_data(card, ULP_SETUP_SIZE, iob, - qeth_ulp_setup_cb, NULL); - return rc; -} - -static inline int -qeth_check_qdio_errors(struct qdio_buffer *buf, unsigned int qdio_error, - unsigned int siga_error, const char *dbftext) -{ - if (qdio_error || siga_error) { - QETH_DBF_TEXT(trace, 2, dbftext); - QETH_DBF_TEXT(qerr, 2, dbftext); - QETH_DBF_TEXT_(qerr, 2, " F15=%02X", - buf->element[15].flags & 0xff); - QETH_DBF_TEXT_(qerr, 2, " F14=%02X", - buf->element[14].flags & 0xff); - QETH_DBF_TEXT_(qerr, 2, " qerr=%X", qdio_error); - QETH_DBF_TEXT_(qerr, 2, " serr=%X", siga_error); - return 1; - } - return 0; -} - -static struct sk_buff * -qeth_get_skb(unsigned int length, struct qeth_hdr *hdr) -{ - struct sk_buff* skb; - int add_len; - - add_len = 0; - 
if (hdr->hdr.osn.id == QETH_HEADER_TYPE_OSN) - add_len = sizeof(struct qeth_hdr); -#ifdef CONFIG_QETH_VLAN - else - add_len = VLAN_HLEN; -#endif - skb = dev_alloc_skb(length + add_len); - if (skb && add_len) - skb_reserve(skb, add_len); - return skb; -} - -static inline int -qeth_create_skb_frag(struct qdio_buffer_element *element, - struct sk_buff **pskb, - int offset, int *pfrag, int data_len) -{ - struct page *page = virt_to_page(element->addr); - if (*pfrag == 0) { - /* the upper protocol layers assume that there is data in the - * skb itself. Copy a small amount (64 bytes) to make them - * happy. */ - *pskb = dev_alloc_skb(64 + QETH_FAKE_LL_LEN_ETH); - if (!(*pskb)) - return -ENOMEM; - skb_reserve(*pskb, QETH_FAKE_LL_LEN_ETH); - if (data_len <= 64) { - memcpy(skb_put(*pskb, data_len), element->addr + offset, - data_len); - } else { - get_page(page); - memcpy(skb_put(*pskb, 64), element->addr + offset, 64); - skb_fill_page_desc(*pskb, *pfrag, page, offset + 64, - data_len - 64); - (*pskb)->data_len += data_len - 64; - (*pskb)->len += data_len - 64; - (*pskb)->truesize += data_len - 64; - } - } else { - get_page(page); - skb_fill_page_desc(*pskb, *pfrag, page, offset, data_len); - (*pskb)->data_len += data_len; - (*pskb)->len += data_len; - (*pskb)->truesize += data_len; - } - (*pfrag)++; - return 0; -} - -static inline struct qeth_buffer_pool_entry * -qeth_find_free_buffer_pool_entry(struct qeth_card *card) -{ - struct list_head *plh; - struct qeth_buffer_pool_entry *entry; - int i, free; - struct page *page; - - if (list_empty(&card->qdio.in_buf_pool.entry_list)) - return NULL; - - list_for_each(plh, &card->qdio.in_buf_pool.entry_list) { - entry = list_entry(plh, struct qeth_buffer_pool_entry, list); - free = 1; - for (i = 0; i < QETH_MAX_BUFFER_ELEMENTS(card); ++i) { - if (page_count(virt_to_page(entry->elements[i])) > 1) { - free = 0; - break; - } - } - if (free) { - list_del_init(&entry->list); - return entry; - } - } - - /* no free buffer in pool so take first one and swap pages */ - entry = list_entry(card->qdio.in_buf_pool.entry_list.next, - struct qeth_buffer_pool_entry, list); - for (i = 0; i < QETH_MAX_BUFFER_ELEMENTS(card); ++i) { - if (page_count(virt_to_page(entry->elements[i])) > 1) { - page = alloc_page(GFP_ATOMIC|GFP_DMA); - if (!page) { - return NULL; - } else { - free_page((unsigned long)entry->elements[i]); - entry->elements[i] = page_address(page); - if (card->options.performance_stats) - card->perf_stats.sg_alloc_page_rx++; - } - } - } - list_del_init(&entry->list); - return entry; -} - -static struct sk_buff * -qeth_get_next_skb(struct qeth_card *card, struct qdio_buffer *buffer, - struct qdio_buffer_element **__element, int *__offset, - struct qeth_hdr **hdr) -{ - struct qdio_buffer_element *element = *__element; - int offset = *__offset; - struct sk_buff *skb = NULL; - int skb_len; - void *data_ptr; - int data_len; - int use_rx_sg = 0; - int frag = 0; - - QETH_DBF_TEXT(trace,6,"nextskb"); - /* qeth_hdr must not cross element boundaries */ - if (element->length < offset + sizeof(struct qeth_hdr)){ - if (qeth_is_last_sbale(element)) - return NULL; - element++; - offset = 0; - if (element->length < sizeof(struct qeth_hdr)) - return NULL; - } - *hdr = element->addr + offset; - - offset += sizeof(struct qeth_hdr); - if (card->options.layer2) - if (card->info.type == QETH_CARD_TYPE_OSN) - skb_len = (*hdr)->hdr.osn.pdu_length; - else - skb_len = (*hdr)->hdr.l2.pkt_length; - else - skb_len = (*hdr)->hdr.l3.length; - - if (!skb_len) - return NULL; - if ((skb_len >= 
card->options.rx_sg_cb) && - (!(card->info.type == QETH_CARD_TYPE_OSN)) && - (!atomic_read(&card->force_alloc_skb))) { - use_rx_sg = 1; - } else { - if (card->options.fake_ll) { - if (card->dev->type == ARPHRD_IEEE802_TR) { - if (!(skb = qeth_get_skb(skb_len + - QETH_FAKE_LL_LEN_TR, *hdr))) - goto no_mem; - skb_reserve(skb, QETH_FAKE_LL_LEN_TR); - } else { - if (!(skb = qeth_get_skb(skb_len + - QETH_FAKE_LL_LEN_ETH, *hdr))) - goto no_mem; - skb_reserve(skb, QETH_FAKE_LL_LEN_ETH); - } - } else { - skb = qeth_get_skb(skb_len, *hdr); - if (!skb) - goto no_mem; - } - } - - data_ptr = element->addr + offset; - while (skb_len) { - data_len = min(skb_len, (int)(element->length - offset)); - if (data_len) { - if (use_rx_sg) { - if (qeth_create_skb_frag(element, &skb, offset, - &frag, data_len)) - goto no_mem; - } else { - memcpy(skb_put(skb, data_len), data_ptr, - data_len); - } - } - skb_len -= data_len; - if (skb_len){ - if (qeth_is_last_sbale(element)){ - QETH_DBF_TEXT(trace,4,"unexeob"); - QETH_DBF_TEXT_(trace,4,"%s",CARD_BUS_ID(card)); - QETH_DBF_TEXT(qerr,2,"unexeob"); - QETH_DBF_TEXT_(qerr,2,"%s",CARD_BUS_ID(card)); - QETH_DBF_HEX(misc,4,buffer,sizeof(*buffer)); - dev_kfree_skb_any(skb); - card->stats.rx_errors++; - return NULL; - } - element++; - offset = 0; - data_ptr = element->addr; - } else { - offset += data_len; - } - } - *__element = element; - *__offset = offset; - if (use_rx_sg && card->options.performance_stats) { - card->perf_stats.sg_skbs_rx++; - card->perf_stats.sg_frags_rx += skb_shinfo(skb)->nr_frags; - } - return skb; -no_mem: - if (net_ratelimit()){ - PRINT_WARN("No memory for packet received on %s.\n", - QETH_CARD_IFNAME(card)); - QETH_DBF_TEXT(trace,2,"noskbmem"); - QETH_DBF_TEXT_(trace,2,"%s",CARD_BUS_ID(card)); - } - card->stats.rx_dropped++; - return NULL; -} - -static __be16 -qeth_type_trans(struct sk_buff *skb, struct net_device *dev) -{ - struct qeth_card *card; - struct ethhdr *eth; - - QETH_DBF_TEXT(trace,6,"typtrans"); - - card = (struct qeth_card *)dev->priv; -#ifdef CONFIG_TR - if ((card->info.link_type == QETH_LINK_TYPE_HSTR) || - (card->info.link_type == QETH_LINK_TYPE_LANE_TR)) - return tr_type_trans(skb,dev); -#endif /* CONFIG_TR */ - skb_reset_mac_header(skb); - skb_pull(skb, ETH_HLEN ); - eth = eth_hdr(skb); - - if (*eth->h_dest & 1) { - if (memcmp(eth->h_dest, dev->broadcast, ETH_ALEN) == 0) - skb->pkt_type = PACKET_BROADCAST; - else - skb->pkt_type = PACKET_MULTICAST; - } else if (memcmp(eth->h_dest, dev->dev_addr, ETH_ALEN)) - skb->pkt_type = PACKET_OTHERHOST; - - if (ntohs(eth->h_proto) >= 1536) - return eth->h_proto; - if (*(unsigned short *) (skb->data) == 0xFFFF) - return htons(ETH_P_802_3); - return htons(ETH_P_802_2); -} - -static void -qeth_rebuild_skb_fake_ll_tr(struct qeth_card *card, struct sk_buff *skb, - struct qeth_hdr *hdr) -{ - struct trh_hdr *fake_hdr; - struct trllc *fake_llc; - struct iphdr *ip_hdr; - - QETH_DBF_TEXT(trace,5,"skbfktr"); - skb_set_mac_header(skb, (int)-QETH_FAKE_LL_LEN_TR); - /* this is a fake ethernet header */ - fake_hdr = tr_hdr(skb); - - /* the destination MAC address */ - switch (skb->pkt_type){ - case PACKET_MULTICAST: - switch (skb->protocol){ -#ifdef CONFIG_QETH_IPV6 - case __constant_htons(ETH_P_IPV6): - ndisc_mc_map((struct in6_addr *) - skb->data + QETH_FAKE_LL_V6_ADDR_POS, - fake_hdr->daddr, card->dev, 0); - break; -#endif /* CONFIG_QETH_IPV6 */ - case __constant_htons(ETH_P_IP): - ip_hdr = (struct iphdr *)skb->data; - ip_tr_mc_map(ip_hdr->daddr, fake_hdr->daddr); - break; - default: - 
memcpy(fake_hdr->daddr, card->dev->dev_addr, TR_ALEN); - } - break; - case PACKET_BROADCAST: - memset(fake_hdr->daddr, 0xff, TR_ALEN); - break; - default: - memcpy(fake_hdr->daddr, card->dev->dev_addr, TR_ALEN); - } - /* the source MAC address */ - if (hdr->hdr.l3.ext_flags & QETH_HDR_EXT_SRC_MAC_ADDR) - memcpy(fake_hdr->saddr, &hdr->hdr.l3.dest_addr[2], TR_ALEN); - else - memset(fake_hdr->saddr, 0, TR_ALEN); - fake_hdr->rcf=0; - fake_llc = (struct trllc*)&(fake_hdr->rcf); - fake_llc->dsap = EXTENDED_SAP; - fake_llc->ssap = EXTENDED_SAP; - fake_llc->llc = UI_CMD; - fake_llc->protid[0] = 0; - fake_llc->protid[1] = 0; - fake_llc->protid[2] = 0; - fake_llc->ethertype = ETH_P_IP; -} - -static void -qeth_rebuild_skb_fake_ll_eth(struct qeth_card *card, struct sk_buff *skb, - struct qeth_hdr *hdr) -{ - struct ethhdr *fake_hdr; - struct iphdr *ip_hdr; - - QETH_DBF_TEXT(trace,5,"skbfketh"); - skb_set_mac_header(skb, -QETH_FAKE_LL_LEN_ETH); - /* this is a fake ethernet header */ - fake_hdr = eth_hdr(skb); - - /* the destination MAC address */ - switch (skb->pkt_type){ - case PACKET_MULTICAST: - switch (skb->protocol){ -#ifdef CONFIG_QETH_IPV6 - case __constant_htons(ETH_P_IPV6): - ndisc_mc_map((struct in6_addr *) - skb->data + QETH_FAKE_LL_V6_ADDR_POS, - fake_hdr->h_dest, card->dev, 0); - break; -#endif /* CONFIG_QETH_IPV6 */ - case __constant_htons(ETH_P_IP): - ip_hdr = (struct iphdr *)skb->data; - ip_eth_mc_map(ip_hdr->daddr, fake_hdr->h_dest); - break; - default: - memcpy(fake_hdr->h_dest, card->dev->dev_addr, ETH_ALEN); - } - break; - case PACKET_BROADCAST: - memset(fake_hdr->h_dest, 0xff, ETH_ALEN); - break; - default: - memcpy(fake_hdr->h_dest, card->dev->dev_addr, ETH_ALEN); - } - /* the source MAC address */ - if (hdr->hdr.l3.ext_flags & QETH_HDR_EXT_SRC_MAC_ADDR) - memcpy(fake_hdr->h_source, &hdr->hdr.l3.dest_addr[2], ETH_ALEN); - else - memset(fake_hdr->h_source, 0, ETH_ALEN); - /* the protocol */ - fake_hdr->h_proto = skb->protocol; -} - -static inline void -qeth_rebuild_skb_fake_ll(struct qeth_card *card, struct sk_buff *skb, - struct qeth_hdr *hdr) -{ - if (card->dev->type == ARPHRD_IEEE802_TR) - qeth_rebuild_skb_fake_ll_tr(card, skb, hdr); - else - qeth_rebuild_skb_fake_ll_eth(card, skb, hdr); -} - -static inline void -qeth_layer2_rebuild_skb(struct qeth_card *card, struct sk_buff *skb, - struct qeth_hdr *hdr) -{ - skb->pkt_type = PACKET_HOST; - skb->protocol = qeth_type_trans(skb, skb->dev); - if (card->options.checksum_type == NO_CHECKSUMMING) - skb->ip_summed = CHECKSUM_UNNECESSARY; - else - skb->ip_summed = CHECKSUM_NONE; - *((__u32 *)skb->cb) = ++card->seqno.pkt_seqno; -} - -static __u16 -qeth_rebuild_skb(struct qeth_card *card, struct sk_buff *skb, - struct qeth_hdr *hdr) -{ - unsigned short vlan_id = 0; -#ifdef CONFIG_QETH_IPV6 - if (hdr->hdr.l3.flags & QETH_HDR_PASSTHRU) { - skb->pkt_type = PACKET_HOST; - skb->protocol = qeth_type_trans(skb, card->dev); - return 0; - } -#endif /* CONFIG_QETH_IPV6 */ - skb->protocol = htons((hdr->hdr.l3.flags & QETH_HDR_IPV6)? 
ETH_P_IPV6 : - ETH_P_IP); - switch (hdr->hdr.l3.flags & QETH_HDR_CAST_MASK){ - case QETH_CAST_UNICAST: - skb->pkt_type = PACKET_HOST; - break; - case QETH_CAST_MULTICAST: - skb->pkt_type = PACKET_MULTICAST; - card->stats.multicast++; - break; - case QETH_CAST_BROADCAST: - skb->pkt_type = PACKET_BROADCAST; - card->stats.multicast++; - break; - case QETH_CAST_ANYCAST: - case QETH_CAST_NOCAST: - default: - skb->pkt_type = PACKET_HOST; - } - - if (hdr->hdr.l3.ext_flags & - (QETH_HDR_EXT_VLAN_FRAME | QETH_HDR_EXT_INCLUDE_VLAN_TAG)) { - vlan_id = (hdr->hdr.l3.ext_flags & QETH_HDR_EXT_VLAN_FRAME)? - hdr->hdr.l3.vlan_id : *((u16 *)&hdr->hdr.l3.dest_addr[12]); - } - - if (card->options.fake_ll) - qeth_rebuild_skb_fake_ll(card, skb, hdr); - else - skb_reset_mac_header(skb); - skb->ip_summed = card->options.checksum_type; - if (card->options.checksum_type == HW_CHECKSUMMING){ - if ( (hdr->hdr.l3.ext_flags & - (QETH_HDR_EXT_CSUM_HDR_REQ | - QETH_HDR_EXT_CSUM_TRANSP_REQ)) == - (QETH_HDR_EXT_CSUM_HDR_REQ | - QETH_HDR_EXT_CSUM_TRANSP_REQ) ) - skb->ip_summed = CHECKSUM_UNNECESSARY; - else - skb->ip_summed = SW_CHECKSUMMING; - } - return vlan_id; -} - -static void -qeth_process_inbound_buffer(struct qeth_card *card, - struct qeth_qdio_buffer *buf, int index) -{ - struct qdio_buffer_element *element; - struct sk_buff *skb; - struct qeth_hdr *hdr; - int offset; - int rxrc; - __u16 vlan_tag = 0; - - /* get first element of current buffer */ - element = (struct qdio_buffer_element *)&buf->buffer->element[0]; - offset = 0; - if (card->options.performance_stats) - card->perf_stats.bufs_rec++; - while((skb = qeth_get_next_skb(card, buf->buffer, &element, - &offset, &hdr))) { - skb->dev = card->dev; - if (hdr->hdr.l2.id == QETH_HEADER_TYPE_LAYER2) - qeth_layer2_rebuild_skb(card, skb, hdr); - else if (hdr->hdr.l3.id == QETH_HEADER_TYPE_LAYER3) - vlan_tag = qeth_rebuild_skb(card, skb, hdr); - else if (hdr->hdr.osn.id == QETH_HEADER_TYPE_OSN) { - skb_push(skb, sizeof(struct qeth_hdr)); - skb_copy_to_linear_data(skb, hdr, - sizeof(struct qeth_hdr)); - } else { /* unknown header type */ - dev_kfree_skb_any(skb); - QETH_DBF_TEXT(trace, 3, "inbunkno"); - QETH_DBF_HEX(control, 3, hdr, QETH_DBF_CONTROL_LEN); - continue; - } - /* is device UP ? 
*/ - if (!(card->dev->flags & IFF_UP)){ - dev_kfree_skb_any(skb); - continue; - } - if (card->info.type == QETH_CARD_TYPE_OSN) - rxrc = card->osn_info.data_cb(skb); - else -#ifdef CONFIG_QETH_VLAN - if (vlan_tag) - if (card->vlangrp) - vlan_hwaccel_rx(skb, card->vlangrp, vlan_tag); - else { - dev_kfree_skb_any(skb); - continue; - } - else -#endif - rxrc = netif_rx(skb); - card->dev->last_rx = jiffies; - card->stats.rx_packets++; - card->stats.rx_bytes += skb->len; - } -} - -static int -qeth_init_input_buffer(struct qeth_card *card, struct qeth_qdio_buffer *buf) -{ - struct qeth_buffer_pool_entry *pool_entry; - int i; - - pool_entry = qeth_find_free_buffer_pool_entry(card); - if (!pool_entry) - return 1; - /* - * since the buffer is accessed only from the input_tasklet - * there shouldn't be a need to synchronize; also, since we use - * the QETH_IN_BUF_REQUEUE_THRESHOLD we should never run out off - * buffers - */ - BUG_ON(!pool_entry); - - buf->pool_entry = pool_entry; - for(i = 0; i < QETH_MAX_BUFFER_ELEMENTS(card); ++i){ - buf->buffer->element[i].length = PAGE_SIZE; - buf->buffer->element[i].addr = pool_entry->elements[i]; - if (i == QETH_MAX_BUFFER_ELEMENTS(card) - 1) - buf->buffer->element[i].flags = SBAL_FLAGS_LAST_ENTRY; - else - buf->buffer->element[i].flags = 0; - } - buf->state = QETH_QDIO_BUF_EMPTY; - return 0; -} - -static void -qeth_clear_output_buffer(struct qeth_qdio_out_q *queue, - struct qeth_qdio_out_buffer *buf) -{ - int i; - struct sk_buff *skb; - - /* is PCI flag set on buffer? */ - if (buf->buffer->element[0].flags & 0x40) - atomic_dec(&queue->set_pci_flags_count); - - while ((skb = skb_dequeue(&buf->skb_list))){ - atomic_dec(&skb->users); - dev_kfree_skb_any(skb); - } - qeth_eddp_buf_release_contexts(buf); - for(i = 0; i < QETH_MAX_BUFFER_ELEMENTS(queue->card); ++i){ - buf->buffer->element[i].length = 0; - buf->buffer->element[i].addr = NULL; - buf->buffer->element[i].flags = 0; - } - buf->next_element_to_fill = 0; - atomic_set(&buf->state, QETH_QDIO_BUF_EMPTY); -} - -static void -qeth_queue_input_buffer(struct qeth_card *card, int index) -{ - struct qeth_qdio_q *queue = card->qdio.in_q; - int count; - int i; - int rc; - int newcount = 0; - - QETH_DBF_TEXT(trace,6,"queinbuf"); - count = (index < queue->next_buf_to_init)? - card->qdio.in_buf_pool.buf_count - - (queue->next_buf_to_init - index) : - card->qdio.in_buf_pool.buf_count - - (queue->next_buf_to_init + QDIO_MAX_BUFFERS_PER_Q - index); - /* only requeue at a certain threshold to avoid SIGAs */ - if (count >= QETH_IN_BUF_REQUEUE_THRESHOLD(card)){ - for (i = queue->next_buf_to_init; - i < queue->next_buf_to_init + count; ++i) { - if (qeth_init_input_buffer(card, - &queue->bufs[i % QDIO_MAX_BUFFERS_PER_Q])) { - break; - } else { - newcount++; - } - } - - if (newcount < count) { - /* we are in memory shortage so we switch back to - traditional skb allocation and drop packages */ - if (!atomic_read(&card->force_alloc_skb) && - net_ratelimit()) - PRINT_WARN("Switch to alloc skb\n"); - atomic_set(&card->force_alloc_skb, 3); - count = newcount; - } else { - if ((atomic_read(&card->force_alloc_skb) == 1) && - net_ratelimit()) - PRINT_WARN("Switch to sg\n"); - atomic_add_unless(&card->force_alloc_skb, -1, 0); - } - - /* - * according to old code it should be avoided to requeue all - * 128 buffers in order to benefit from PCI avoidance. 
- * this function keeps at least one buffer (the buffer at - * 'index') un-requeued -> this buffer is the first buffer that - * will be requeued the next time - */ - if (card->options.performance_stats) { - card->perf_stats.inbound_do_qdio_cnt++; - card->perf_stats.inbound_do_qdio_start_time = - qeth_get_micros(); - } - rc = do_QDIO(CARD_DDEV(card), - QDIO_FLAG_SYNC_INPUT | QDIO_FLAG_UNDER_INTERRUPT, - 0, queue->next_buf_to_init, count, NULL); - if (card->options.performance_stats) - card->perf_stats.inbound_do_qdio_time += - qeth_get_micros() - - card->perf_stats.inbound_do_qdio_start_time; - if (rc){ - PRINT_WARN("qeth_queue_input_buffer's do_QDIO " - "return %i (device %s).\n", - rc, CARD_DDEV_ID(card)); - QETH_DBF_TEXT(trace,2,"qinberr"); - QETH_DBF_TEXT_(trace,2,"%s",CARD_BUS_ID(card)); - } - queue->next_buf_to_init = (queue->next_buf_to_init + count) % - QDIO_MAX_BUFFERS_PER_Q; - } -} - -static inline void -qeth_put_buffer_pool_entry(struct qeth_card *card, - struct qeth_buffer_pool_entry *entry) -{ - QETH_DBF_TEXT(trace, 6, "ptbfplen"); - list_add_tail(&entry->list, &card->qdio.in_buf_pool.entry_list); -} - -static void -qeth_qdio_input_handler(struct ccw_device * ccwdev, unsigned int status, - unsigned int qdio_err, unsigned int siga_err, - unsigned int queue, int first_element, int count, - unsigned long card_ptr) -{ - struct net_device *net_dev; - struct qeth_card *card; - struct qeth_qdio_buffer *buffer; - int index; - int i; - - QETH_DBF_TEXT(trace, 6, "qdinput"); - card = (struct qeth_card *) card_ptr; - net_dev = card->dev; - if (card->options.performance_stats) { - card->perf_stats.inbound_cnt++; - card->perf_stats.inbound_start_time = qeth_get_micros(); - } - if (status & QDIO_STATUS_LOOK_FOR_ERROR) { - if (status & QDIO_STATUS_ACTIVATE_CHECK_CONDITION){ - QETH_DBF_TEXT(trace, 1,"qdinchk"); - QETH_DBF_TEXT_(trace,1,"%s",CARD_BUS_ID(card)); - QETH_DBF_TEXT_(trace,1,"%04X%04X",first_element,count); - QETH_DBF_TEXT_(trace,1,"%04X%04X", queue, status); - qeth_schedule_recovery(card); - return; - } - } - for (i = first_element; i < (first_element + count); ++i) { - index = i % QDIO_MAX_BUFFERS_PER_Q; - buffer = &card->qdio.in_q->bufs[index]; - if (!((status & QDIO_STATUS_LOOK_FOR_ERROR) && - qeth_check_qdio_errors(buffer->buffer, - qdio_err, siga_err,"qinerr"))) - qeth_process_inbound_buffer(card, buffer, index); - /* clear buffer and give back to hardware */ - qeth_put_buffer_pool_entry(card, buffer->pool_entry); - qeth_queue_input_buffer(card, index); - } - if (card->options.performance_stats) - card->perf_stats.inbound_time += qeth_get_micros() - - card->perf_stats.inbound_start_time; -} - -static int -qeth_handle_send_error(struct qeth_card *card, - struct qeth_qdio_out_buffer *buffer, - unsigned int qdio_err, unsigned int siga_err) -{ - int sbalf15 = buffer->buffer->element[15].flags & 0xff; - int cc = siga_err & 3; - - QETH_DBF_TEXT(trace, 6, "hdsnderr"); - qeth_check_qdio_errors(buffer->buffer, qdio_err, siga_err, "qouterr"); - switch (cc) { - case 0: - if (qdio_err){ - QETH_DBF_TEXT(trace, 1,"lnkfail"); - QETH_DBF_TEXT_(trace,1,"%s",CARD_BUS_ID(card)); - QETH_DBF_TEXT_(trace,1,"%04x %02x", - (u16)qdio_err, (u8)sbalf15); - return QETH_SEND_ERROR_LINK_FAILURE; - } - return QETH_SEND_ERROR_NONE; - case 2: - if (siga_err & QDIO_SIGA_ERROR_B_BIT_SET) { - QETH_DBF_TEXT(trace, 1, "SIGAcc2B"); - QETH_DBF_TEXT_(trace,1,"%s",CARD_BUS_ID(card)); - return QETH_SEND_ERROR_KICK_IT; - } - if ((sbalf15 >= 15) && (sbalf15 <= 31)) - return QETH_SEND_ERROR_RETRY; - return 
QETH_SEND_ERROR_LINK_FAILURE; - /* look at qdio_error and sbalf 15 */ - case 1: - QETH_DBF_TEXT(trace, 1, "SIGAcc1"); - QETH_DBF_TEXT_(trace,1,"%s",CARD_BUS_ID(card)); - return QETH_SEND_ERROR_LINK_FAILURE; - case 3: - default: - QETH_DBF_TEXT(trace, 1, "SIGAcc3"); - QETH_DBF_TEXT_(trace,1,"%s",CARD_BUS_ID(card)); - return QETH_SEND_ERROR_KICK_IT; - } -} - -void -qeth_flush_buffers(struct qeth_qdio_out_q *queue, int under_int, - int index, int count) -{ - struct qeth_qdio_out_buffer *buf; - int rc; - int i; - unsigned int qdio_flags; - - QETH_DBF_TEXT(trace, 6, "flushbuf"); - - for (i = index; i < index + count; ++i) { - buf = &queue->bufs[i % QDIO_MAX_BUFFERS_PER_Q]; - buf->buffer->element[buf->next_element_to_fill - 1].flags |= - SBAL_FLAGS_LAST_ENTRY; - - if (queue->card->info.type == QETH_CARD_TYPE_IQD) - continue; - - if (!queue->do_pack){ - if ((atomic_read(&queue->used_buffers) >= - (QETH_HIGH_WATERMARK_PACK - - QETH_WATERMARK_PACK_FUZZ)) && - !atomic_read(&queue->set_pci_flags_count)){ - /* it's likely that we'll go to packing - * mode soon */ - atomic_inc(&queue->set_pci_flags_count); - buf->buffer->element[0].flags |= 0x40; - } - } else { - if (!atomic_read(&queue->set_pci_flags_count)){ - /* - * there's no outstanding PCI any more, so we - * have to request a PCI to be sure that the PCI - * will wake at some time in the future then we - * can flush packed buffers that might still be - * hanging around, which can happen if no - * further send was requested by the stack - */ - atomic_inc(&queue->set_pci_flags_count); - buf->buffer->element[0].flags |= 0x40; - } - } - } - - queue->card->dev->trans_start = jiffies; - if (queue->card->options.performance_stats) { - queue->card->perf_stats.outbound_do_qdio_cnt++; - queue->card->perf_stats.outbound_do_qdio_start_time = - qeth_get_micros(); - } - qdio_flags = QDIO_FLAG_SYNC_OUTPUT; - if (under_int) - qdio_flags |= QDIO_FLAG_UNDER_INTERRUPT; - if (atomic_read(&queue->set_pci_flags_count)) - qdio_flags |= QDIO_FLAG_PCI_OUT; - rc = do_QDIO(CARD_DDEV(queue->card), qdio_flags, - queue->queue_no, index, count, NULL); - if (queue->card->options.performance_stats) - queue->card->perf_stats.outbound_do_qdio_time += - qeth_get_micros() - - queue->card->perf_stats.outbound_do_qdio_start_time; - if (rc){ - QETH_DBF_TEXT(trace, 2, "flushbuf"); - QETH_DBF_TEXT_(trace, 2, " err%d", rc); - QETH_DBF_TEXT_(trace, 2, "%s", CARD_DDEV_ID(queue->card)); - queue->card->stats.tx_errors += count; - /* this must not happen under normal circumstances. if it - * happens something is really wrong -> recover */ - qeth_schedule_recovery(queue->card); - return; - } - atomic_add(count, &queue->used_buffers); - if (queue->card->options.performance_stats) - queue->card->perf_stats.bufs_sent += count; -} - -/* - * Switched to packing state if the number of used buffers on a queue - * reaches a certain limit. - */ -static void -qeth_switch_to_packing_if_needed(struct qeth_qdio_out_q *queue) -{ - if (!queue->do_pack) { - if (atomic_read(&queue->used_buffers) - >= QETH_HIGH_WATERMARK_PACK){ - /* switch non-PACKING -> PACKING */ - QETH_DBF_TEXT(trace, 6, "np->pack"); - if (queue->card->options.performance_stats) - queue->card->perf_stats.sc_dp_p++; - queue->do_pack = 1; - } - } -} - -/* - * Switches from packing to non-packing mode. If there is a packing - * buffer on the queue this buffer will be prepared to be flushed. - * In that case 1 is returned to inform the caller. If no buffer - * has to be flushed, zero is returned. 
- */ -static int -qeth_switch_to_nonpacking_if_needed(struct qeth_qdio_out_q *queue) -{ - struct qeth_qdio_out_buffer *buffer; - int flush_count = 0; - - if (queue->do_pack) { - if (atomic_read(&queue->used_buffers) - <= QETH_LOW_WATERMARK_PACK) { - /* switch PACKING -> non-PACKING */ - QETH_DBF_TEXT(trace, 6, "pack->np"); - if (queue->card->options.performance_stats) - queue->card->perf_stats.sc_p_dp++; - queue->do_pack = 0; - /* flush packing buffers */ - buffer = &queue->bufs[queue->next_buf_to_fill]; - if ((atomic_read(&buffer->state) == - QETH_QDIO_BUF_EMPTY) && - (buffer->next_element_to_fill > 0)) { - atomic_set(&buffer->state,QETH_QDIO_BUF_PRIMED); - flush_count++; - queue->next_buf_to_fill = - (queue->next_buf_to_fill + 1) % - QDIO_MAX_BUFFERS_PER_Q; - } - } - } - return flush_count; -} - -/* - * Called to flush a packing buffer if no more pci flags are on the queue. - * Checks if there is a packing buffer and prepares it to be flushed. - * In that case returns 1, otherwise zero. - */ -static int -qeth_flush_buffers_on_no_pci(struct qeth_qdio_out_q *queue) -{ - struct qeth_qdio_out_buffer *buffer; - - buffer = &queue->bufs[queue->next_buf_to_fill]; - if((atomic_read(&buffer->state) == QETH_QDIO_BUF_EMPTY) && - (buffer->next_element_to_fill > 0)){ - /* it's a packing buffer */ - atomic_set(&buffer->state, QETH_QDIO_BUF_PRIMED); - queue->next_buf_to_fill = - (queue->next_buf_to_fill + 1) % QDIO_MAX_BUFFERS_PER_Q; - return 1; - } - return 0; -} - -static void -qeth_check_outbound_queue(struct qeth_qdio_out_q *queue) -{ - int index; - int flush_cnt = 0; - int q_was_packing = 0; - - /* - * check if weed have to switch to non-packing mode or if - * we have to get a pci flag out on the queue - */ - if ((atomic_read(&queue->used_buffers) <= QETH_LOW_WATERMARK_PACK) || - !atomic_read(&queue->set_pci_flags_count)){ - if (atomic_xchg(&queue->state, QETH_OUT_Q_LOCKED_FLUSH) == - QETH_OUT_Q_UNLOCKED) { - /* - * If we get in here, there was no action in - * do_send_packet. So, we check if there is a - * packing buffer to be flushed here. 
- */ - netif_stop_queue(queue->card->dev); - index = queue->next_buf_to_fill; - q_was_packing = queue->do_pack; - flush_cnt += qeth_switch_to_nonpacking_if_needed(queue); - if (!flush_cnt && - !atomic_read(&queue->set_pci_flags_count)) - flush_cnt += - qeth_flush_buffers_on_no_pci(queue); - if (queue->card->options.performance_stats && - q_was_packing) - queue->card->perf_stats.bufs_sent_pack += - flush_cnt; - if (flush_cnt) - qeth_flush_buffers(queue, 1, index, flush_cnt); - atomic_set(&queue->state, QETH_OUT_Q_UNLOCKED); - } - } -} - -static void -qeth_qdio_output_handler(struct ccw_device * ccwdev, unsigned int status, - unsigned int qdio_error, unsigned int siga_error, - unsigned int __queue, int first_element, int count, - unsigned long card_ptr) -{ - struct qeth_card *card = (struct qeth_card *) card_ptr; - struct qeth_qdio_out_q *queue = card->qdio.out_qs[__queue]; - struct qeth_qdio_out_buffer *buffer; - int i; - - QETH_DBF_TEXT(trace, 6, "qdouhdl"); - if (status & QDIO_STATUS_LOOK_FOR_ERROR) { - if (status & QDIO_STATUS_ACTIVATE_CHECK_CONDITION){ - QETH_DBF_TEXT(trace, 2, "achkcond"); - QETH_DBF_TEXT_(trace, 2, "%s", CARD_BUS_ID(card)); - QETH_DBF_TEXT_(trace, 2, "%08x", status); - netif_stop_queue(card->dev); - qeth_schedule_recovery(card); - return; - } - } - if (card->options.performance_stats) { - card->perf_stats.outbound_handler_cnt++; - card->perf_stats.outbound_handler_start_time = - qeth_get_micros(); - } - for(i = first_element; i < (first_element + count); ++i){ - buffer = &queue->bufs[i % QDIO_MAX_BUFFERS_PER_Q]; - /*we only handle the KICK_IT error by doing a recovery */ - if (qeth_handle_send_error(card, buffer, - qdio_error, siga_error) - == QETH_SEND_ERROR_KICK_IT){ - netif_stop_queue(card->dev); - qeth_schedule_recovery(card); - return; - } - qeth_clear_output_buffer(queue, buffer); - } - atomic_sub(count, &queue->used_buffers); - /* check if we need to do something on this outbound queue */ - if (card->info.type != QETH_CARD_TYPE_IQD) - qeth_check_outbound_queue(queue); - - netif_wake_queue(queue->card->dev); - if (card->options.performance_stats) - card->perf_stats.outbound_handler_time += qeth_get_micros() - - card->perf_stats.outbound_handler_start_time; -} - -static void -qeth_create_qib_param_field(struct qeth_card *card, char *param_field) -{ - - param_field[0] = _ascebc['P']; - param_field[1] = _ascebc['C']; - param_field[2] = _ascebc['I']; - param_field[3] = _ascebc['T']; - *((unsigned int *) (¶m_field[4])) = QETH_PCI_THRESHOLD_A(card); - *((unsigned int *) (¶m_field[8])) = QETH_PCI_THRESHOLD_B(card); - *((unsigned int *) (¶m_field[12])) = QETH_PCI_TIMER_VALUE(card); -} - -static void -qeth_create_qib_param_field_blkt(struct qeth_card *card, char *param_field) -{ - param_field[16] = _ascebc['B']; - param_field[17] = _ascebc['L']; - param_field[18] = _ascebc['K']; - param_field[19] = _ascebc['T']; - *((unsigned int *) (¶m_field[20])) = card->info.blkt.time_total; - *((unsigned int *) (¶m_field[24])) = card->info.blkt.inter_packet; - *((unsigned int *) (¶m_field[28])) = card->info.blkt.inter_packet_jumbo; -} - -static void -qeth_initialize_working_pool_list(struct qeth_card *card) -{ - struct qeth_buffer_pool_entry *entry; - - QETH_DBF_TEXT(trace,5,"inwrklst"); - - list_for_each_entry(entry, - &card->qdio.init_pool.entry_list, init_list) { - qeth_put_buffer_pool_entry(card,entry); - } -} - -static void -qeth_clear_working_pool_list(struct qeth_card *card) -{ - struct qeth_buffer_pool_entry *pool_entry, *tmp; - - QETH_DBF_TEXT(trace,5,"clwrklst"); - 
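/*
 * Editorial sketch, not part of the original patch: why the loop below
 * uses list_for_each_entry_safe().  The _safe variant keeps a lookahead
 * cursor, so the current entry may be list_del()ed (and freed) without
 * derailing the iteration.  struct demo_entry and demo_drain() are
 * illustrative names only.
 */
#include <linux/list.h>
#include <linux/slab.h>

struct demo_entry {
        struct list_head list;
};

static void demo_drain(struct list_head *head)
{
        struct demo_entry *e, *tmp;

        list_for_each_entry_safe(e, tmp, head, list) {
                list_del(&e->list); /* safe: 'tmp' already points past 'e' */
                kfree(e);
        }
}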
list_for_each_entry_safe(pool_entry, tmp, - &card->qdio.in_buf_pool.entry_list, list){ - list_del(&pool_entry->list); - } -} - -static void -qeth_free_buffer_pool(struct qeth_card *card) -{ - struct qeth_buffer_pool_entry *pool_entry, *tmp; - int i=0; - QETH_DBF_TEXT(trace,5,"freepool"); - list_for_each_entry_safe(pool_entry, tmp, - &card->qdio.init_pool.entry_list, init_list){ - for (i = 0; i < QETH_MAX_BUFFER_ELEMENTS(card); ++i) - free_page((unsigned long)pool_entry->elements[i]); - list_del(&pool_entry->init_list); - kfree(pool_entry); - } -} - -static int -qeth_alloc_buffer_pool(struct qeth_card *card) -{ - struct qeth_buffer_pool_entry *pool_entry; - void *ptr; - int i, j; - - QETH_DBF_TEXT(trace,5,"alocpool"); - for (i = 0; i < card->qdio.init_pool.buf_count; ++i){ - pool_entry = kmalloc(sizeof(*pool_entry), GFP_KERNEL); - if (!pool_entry){ - qeth_free_buffer_pool(card); - return -ENOMEM; - } - for(j = 0; j < QETH_MAX_BUFFER_ELEMENTS(card); ++j){ - ptr = (void *) __get_free_page(GFP_KERNEL|GFP_DMA); - if (!ptr) { - while (j > 0) - free_page((unsigned long) - pool_entry->elements[--j]); - kfree(pool_entry); - qeth_free_buffer_pool(card); - return -ENOMEM; - } - pool_entry->elements[j] = ptr; - } - list_add(&pool_entry->init_list, - &card->qdio.init_pool.entry_list); - } - return 0; -} - -int -qeth_realloc_buffer_pool(struct qeth_card *card, int bufcnt) -{ - QETH_DBF_TEXT(trace, 2, "realcbp"); - - if ((card->state != CARD_STATE_DOWN) && - (card->state != CARD_STATE_RECOVER)) - return -EPERM; - - /* TODO: steel/add buffers from/to a running card's buffer pool (?) */ - qeth_clear_working_pool_list(card); - qeth_free_buffer_pool(card); - card->qdio.in_buf_pool.buf_count = bufcnt; - card->qdio.init_pool.buf_count = bufcnt; - return qeth_alloc_buffer_pool(card); -} - -static int -qeth_alloc_qdio_buffers(struct qeth_card *card) -{ - int i, j; - - QETH_DBF_TEXT(setup, 2, "allcqdbf"); - - if (atomic_cmpxchg(&card->qdio.state, QETH_QDIO_UNINITIALIZED, - QETH_QDIO_ALLOCATED) != QETH_QDIO_UNINITIALIZED) - return 0; - - card->qdio.in_q = kmalloc(sizeof(struct qeth_qdio_q), - GFP_KERNEL|GFP_DMA); - if (!card->qdio.in_q) - goto out_nomem; - QETH_DBF_TEXT(setup, 2, "inq"); - QETH_DBF_HEX(setup, 2, &card->qdio.in_q, sizeof(void *)); - memset(card->qdio.in_q, 0, sizeof(struct qeth_qdio_q)); - /* give inbound qeth_qdio_buffers their qdio_buffers */ - for (i = 0; i < QDIO_MAX_BUFFERS_PER_Q; ++i) - card->qdio.in_q->bufs[i].buffer = - &card->qdio.in_q->qdio_bufs[i]; - /* inbound buffer pool */ - if (qeth_alloc_buffer_pool(card)) - goto out_freeinq; - /* outbound */ - card->qdio.out_qs = - kmalloc(card->qdio.no_out_queues * - sizeof(struct qeth_qdio_out_q *), GFP_KERNEL); - if (!card->qdio.out_qs) - goto out_freepool; - for (i = 0; i < card->qdio.no_out_queues; ++i) { - card->qdio.out_qs[i] = kmalloc(sizeof(struct qeth_qdio_out_q), - GFP_KERNEL|GFP_DMA); - if (!card->qdio.out_qs[i]) - goto out_freeoutq; - QETH_DBF_TEXT_(setup, 2, "outq %i", i); - QETH_DBF_HEX(setup, 2, &card->qdio.out_qs[i], sizeof(void *)); - memset(card->qdio.out_qs[i], 0, sizeof(struct qeth_qdio_out_q)); - card->qdio.out_qs[i]->queue_no = i; - /* give outbound qeth_qdio_buffers their qdio_buffers */ - for (j = 0; j < QDIO_MAX_BUFFERS_PER_Q; ++j){ - card->qdio.out_qs[i]->bufs[j].buffer = - &card->qdio.out_qs[i]->qdio_bufs[j]; - skb_queue_head_init(&card->qdio.out_qs[i]->bufs[j]. 
- skb_list); - lockdep_set_class( - &card->qdio.out_qs[i]->bufs[j].skb_list.lock, - &qdio_out_skb_queue_key); - INIT_LIST_HEAD(&card->qdio.out_qs[i]->bufs[j].ctx_list); - } - } - return 0; - -out_freeoutq: - while (i > 0) - kfree(card->qdio.out_qs[--i]); - kfree(card->qdio.out_qs); - card->qdio.out_qs = NULL; -out_freepool: - qeth_free_buffer_pool(card); -out_freeinq: - kfree(card->qdio.in_q); - card->qdio.in_q = NULL; -out_nomem: - atomic_set(&card->qdio.state, QETH_QDIO_UNINITIALIZED); - return -ENOMEM; -} - -static void -qeth_free_qdio_buffers(struct qeth_card *card) -{ - int i, j; - - QETH_DBF_TEXT(trace, 2, "freeqdbf"); - if (atomic_xchg(&card->qdio.state, QETH_QDIO_UNINITIALIZED) == - QETH_QDIO_UNINITIALIZED) - return; - kfree(card->qdio.in_q); - card->qdio.in_q = NULL; - /* inbound buffer pool */ - qeth_free_buffer_pool(card); - /* free outbound qdio_qs */ - if (card->qdio.out_qs) { - for (i = 0; i < card->qdio.no_out_queues; ++i) { - for (j = 0; j < QDIO_MAX_BUFFERS_PER_Q; ++j) - qeth_clear_output_buffer(card->qdio.out_qs[i], - &card->qdio.out_qs[i]->bufs[j]); - kfree(card->qdio.out_qs[i]); - } - kfree(card->qdio.out_qs); - card->qdio.out_qs = NULL; - } -} - -static void -qeth_clear_qdio_buffers(struct qeth_card *card) -{ - int i, j; - - QETH_DBF_TEXT(trace, 2, "clearqdbf"); - /* clear outbound buffers to free skbs */ - for (i = 0; i < card->qdio.no_out_queues; ++i) - if (card->qdio.out_qs && card->qdio.out_qs[i]) { - for (j = 0; j < QDIO_MAX_BUFFERS_PER_Q; ++j) - qeth_clear_output_buffer(card->qdio.out_qs[i], - &card->qdio.out_qs[i]->bufs[j]); - } -} - -static void -qeth_init_qdio_info(struct qeth_card *card) -{ - QETH_DBF_TEXT(setup, 4, "intqdinf"); - atomic_set(&card->qdio.state, QETH_QDIO_UNINITIALIZED); - /* inbound */ - card->qdio.in_buf_size = QETH_IN_BUF_SIZE_DEFAULT; - card->qdio.init_pool.buf_count = QETH_IN_BUF_COUNT_DEFAULT; - card->qdio.in_buf_pool.buf_count = card->qdio.init_pool.buf_count; - INIT_LIST_HEAD(&card->qdio.in_buf_pool.entry_list); - INIT_LIST_HEAD(&card->qdio.init_pool.entry_list); -} - -static int -qeth_init_qdio_queues(struct qeth_card *card) -{ - int i, j; - int rc; - - QETH_DBF_TEXT(setup, 2, "initqdqs"); - - /* inbound queue */ - memset(card->qdio.in_q->qdio_bufs, 0, - QDIO_MAX_BUFFERS_PER_Q * sizeof(struct qdio_buffer)); - qeth_initialize_working_pool_list(card); - /*give only as many buffers to hardware as we have buffer pool entries*/ - for (i = 0; i < card->qdio.in_buf_pool.buf_count - 1; ++i) - qeth_init_input_buffer(card, &card->qdio.in_q->bufs[i]); - card->qdio.in_q->next_buf_to_init = card->qdio.in_buf_pool.buf_count - 1; - rc = do_QDIO(CARD_DDEV(card), QDIO_FLAG_SYNC_INPUT, 0, 0, - card->qdio.in_buf_pool.buf_count - 1, NULL); - if (rc) { - QETH_DBF_TEXT_(setup, 2, "1err%d", rc); - return rc; - } - rc = qdio_synchronize(CARD_DDEV(card), QDIO_FLAG_SYNC_INPUT, 0); - if (rc) { - QETH_DBF_TEXT_(setup, 2, "2err%d", rc); - return rc; - } - /* outbound queue */ - for (i = 0; i < card->qdio.no_out_queues; ++i){ - memset(card->qdio.out_qs[i]->qdio_bufs, 0, - QDIO_MAX_BUFFERS_PER_Q * sizeof(struct qdio_buffer)); - for (j = 0; j < QDIO_MAX_BUFFERS_PER_Q; ++j){ - qeth_clear_output_buffer(card->qdio.out_qs[i], - &card->qdio.out_qs[i]->bufs[j]); - } - card->qdio.out_qs[i]->card = card; - card->qdio.out_qs[i]->next_buf_to_fill = 0; - card->qdio.out_qs[i]->do_pack = 0; - atomic_set(&card->qdio.out_qs[i]->used_buffers,0); - atomic_set(&card->qdio.out_qs[i]->set_pci_flags_count, 0); - atomic_set(&card->qdio.out_qs[i]->state, - QETH_OUT_Q_UNLOCKED); - } - 
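/*
 * Editorial sketch, not part of the original patch: the goto-based
 * unwind used by qeth_alloc_qdio_buffers() above.  Every allocation
 * gets a label that frees what was acquired before it, so a single
 * -ENOMEM exit path stays readable.  demo_ctx and demo_setup() are
 * illustrative names only.
 */
#include <linux/slab.h>
#include <linux/errno.h>

struct demo_ctx {
        void *in_q;
        void *pool;
};

static int demo_setup(struct demo_ctx *ctx)
{
        ctx->in_q = kzalloc(64, GFP_KERNEL | GFP_DMA);
        if (!ctx->in_q)
                goto out_nomem;
        ctx->pool = kzalloc(64, GFP_KERNEL);
        if (!ctx->pool)
                goto out_free_inq;
        return 0;

out_free_inq:
        kfree(ctx->in_q);
        ctx->in_q = NULL;
out_nomem:
        return -ENOMEM;
}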
return 0; -} - -static int -qeth_qdio_establish(struct qeth_card *card) -{ - struct qdio_initialize init_data; - char *qib_param_field; - struct qdio_buffer **in_sbal_ptrs; - struct qdio_buffer **out_sbal_ptrs; - int i, j, k; - int rc = 0; - - QETH_DBF_TEXT(setup, 2, "qdioest"); - - qib_param_field = kzalloc(QDIO_MAX_BUFFERS_PER_Q * sizeof(char), - GFP_KERNEL); - if (!qib_param_field) - return -ENOMEM; - - qeth_create_qib_param_field(card, qib_param_field); - qeth_create_qib_param_field_blkt(card, qib_param_field); - - in_sbal_ptrs = kmalloc(QDIO_MAX_BUFFERS_PER_Q * sizeof(void *), - GFP_KERNEL); - if (!in_sbal_ptrs) { - kfree(qib_param_field); - return -ENOMEM; - } - for(i = 0; i < QDIO_MAX_BUFFERS_PER_Q; ++i) - in_sbal_ptrs[i] = (struct qdio_buffer *) - virt_to_phys(card->qdio.in_q->bufs[i].buffer); - - out_sbal_ptrs = - kmalloc(card->qdio.no_out_queues * QDIO_MAX_BUFFERS_PER_Q * - sizeof(void *), GFP_KERNEL); - if (!out_sbal_ptrs) { - kfree(in_sbal_ptrs); - kfree(qib_param_field); - return -ENOMEM; - } - for(i = 0, k = 0; i < card->qdio.no_out_queues; ++i) - for(j = 0; j < QDIO_MAX_BUFFERS_PER_Q; ++j, ++k){ - out_sbal_ptrs[k] = (struct qdio_buffer *) - virt_to_phys(card->qdio.out_qs[i]-> - bufs[j].buffer); - } - - memset(&init_data, 0, sizeof(struct qdio_initialize)); - init_data.cdev = CARD_DDEV(card); - init_data.q_format = qeth_get_qdio_q_format(card); - init_data.qib_param_field_format = 0; - init_data.qib_param_field = qib_param_field; - init_data.min_input_threshold = QETH_MIN_INPUT_THRESHOLD; - init_data.max_input_threshold = QETH_MAX_INPUT_THRESHOLD; - init_data.min_output_threshold = QETH_MIN_OUTPUT_THRESHOLD; - init_data.max_output_threshold = QETH_MAX_OUTPUT_THRESHOLD; - init_data.no_input_qs = 1; - init_data.no_output_qs = card->qdio.no_out_queues; - init_data.input_handler = (qdio_handler_t *) - qeth_qdio_input_handler; - init_data.output_handler = (qdio_handler_t *) - qeth_qdio_output_handler; - init_data.int_parm = (unsigned long) card; - init_data.flags = QDIO_INBOUND_0COPY_SBALS | - QDIO_OUTBOUND_0COPY_SBALS | - QDIO_USE_OUTBOUND_PCIS; - init_data.input_sbal_addr_array = (void **) in_sbal_ptrs; - init_data.output_sbal_addr_array = (void **) out_sbal_ptrs; - - if (atomic_cmpxchg(&card->qdio.state, QETH_QDIO_ALLOCATED, - QETH_QDIO_ESTABLISHED) == QETH_QDIO_ALLOCATED) - if ((rc = qdio_initialize(&init_data))) - atomic_set(&card->qdio.state, QETH_QDIO_ALLOCATED); - - kfree(out_sbal_ptrs); - kfree(in_sbal_ptrs); - kfree(qib_param_field); - return rc; -} - -static int -qeth_qdio_activate(struct qeth_card *card) -{ - QETH_DBF_TEXT(setup,3,"qdioact"); - return qdio_activate(CARD_DDEV(card), 0); -} - -static int -qeth_clear_channel(struct qeth_channel *channel) -{ - unsigned long flags; - struct qeth_card *card; - int rc; - - QETH_DBF_TEXT(trace,3,"clearch"); - card = CARD_FROM_CDEV(channel->ccwdev); - spin_lock_irqsave(get_ccwdev_lock(channel->ccwdev), flags); - rc = ccw_device_clear(channel->ccwdev, QETH_CLEAR_CHANNEL_PARM); - spin_unlock_irqrestore(get_ccwdev_lock(channel->ccwdev), flags); - - if (rc) - return rc; - rc = wait_event_interruptible_timeout(card->wait_q, - channel->state==CH_STATE_STOPPED, QETH_TIMEOUT); - if (rc == -ERESTARTSYS) - return rc; - if (channel->state != CH_STATE_STOPPED) - return -ETIME; - channel->state = CH_STATE_DOWN; - return 0; -} - -static int -qeth_halt_channel(struct qeth_channel *channel) -{ - unsigned long flags; - struct qeth_card *card; - int rc; - - QETH_DBF_TEXT(trace,3,"haltch"); - card = CARD_FROM_CDEV(channel->ccwdev); - 
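/*
 * Editorial sketch, not part of the original patch: the start-then-wait
 * shape shared by qeth_clear_channel() above and qeth_halt_channel()
 * here.  The clear/halt I/O is issued under the ccw device lock, then
 * the caller sleeps until the interrupt handler moves the channel state
 * or the timeout expires.  demo_channel and demo_wait_for_state() are
 * illustrative names only.
 */
#include <linux/wait.h>
#include <linux/jiffies.h>
#include <linux/errno.h>

struct demo_channel {
        wait_queue_head_t wait_q;
        int state;
};

static int demo_wait_for_state(struct demo_channel *ch, int want)
{
        long rc = wait_event_interruptible_timeout(ch->wait_q,
                                                   ch->state == want,
                                                   10 * HZ);

        if (rc == -ERESTARTSYS)
                return rc;      /* interrupted by a signal */
        return (ch->state == want) ? 0 : -ETIME;
}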
spin_lock_irqsave(get_ccwdev_lock(channel->ccwdev), flags); - rc = ccw_device_halt(channel->ccwdev, QETH_HALT_CHANNEL_PARM); - spin_unlock_irqrestore(get_ccwdev_lock(channel->ccwdev), flags); - - if (rc) - return rc; - rc = wait_event_interruptible_timeout(card->wait_q, - channel->state==CH_STATE_HALTED, QETH_TIMEOUT); - if (rc == -ERESTARTSYS) - return rc; - if (channel->state != CH_STATE_HALTED) - return -ETIME; - return 0; -} - -static int -qeth_halt_channels(struct qeth_card *card) -{ - int rc1 = 0, rc2=0, rc3 = 0; - - QETH_DBF_TEXT(trace,3,"haltchs"); - rc1 = qeth_halt_channel(&card->read); - rc2 = qeth_halt_channel(&card->write); - rc3 = qeth_halt_channel(&card->data); - if (rc1) - return rc1; - if (rc2) - return rc2; - return rc3; -} -static int -qeth_clear_channels(struct qeth_card *card) -{ - int rc1 = 0, rc2=0, rc3 = 0; - - QETH_DBF_TEXT(trace,3,"clearchs"); - rc1 = qeth_clear_channel(&card->read); - rc2 = qeth_clear_channel(&card->write); - rc3 = qeth_clear_channel(&card->data); - if (rc1) - return rc1; - if (rc2) - return rc2; - return rc3; -} - -static int -qeth_clear_halt_card(struct qeth_card *card, int halt) -{ - int rc = 0; - - QETH_DBF_TEXT(trace,3,"clhacrd"); - QETH_DBF_HEX(trace, 3, &card, sizeof(void *)); - - if (halt) - rc = qeth_halt_channels(card); - if (rc) - return rc; - return qeth_clear_channels(card); -} - -static int -qeth_qdio_clear_card(struct qeth_card *card, int use_halt) -{ - int rc = 0; - - QETH_DBF_TEXT(trace,3,"qdioclr"); - switch (atomic_cmpxchg(&card->qdio.state, QETH_QDIO_ESTABLISHED, - QETH_QDIO_CLEANING)) { - case QETH_QDIO_ESTABLISHED: - if ((rc = qdio_cleanup(CARD_DDEV(card), - (card->info.type == QETH_CARD_TYPE_IQD) ? - QDIO_FLAG_CLEANUP_USING_HALT : - QDIO_FLAG_CLEANUP_USING_CLEAR))) - QETH_DBF_TEXT_(trace, 3, "1err%d", rc); - atomic_set(&card->qdio.state, QETH_QDIO_ALLOCATED); - break; - case QETH_QDIO_CLEANING: - return rc; - default: - break; - } - if ((rc = qeth_clear_halt_card(card, use_halt))) - QETH_DBF_TEXT_(trace, 3, "2err%d", rc); - card->state = CARD_STATE_DOWN; - return rc; -} - -static int -qeth_dm_act(struct qeth_card *card) -{ - int rc; - struct qeth_cmd_buffer *iob; - - QETH_DBF_TEXT(setup,2,"dmact"); - - iob = qeth_wait_for_buffer(&card->write); - memcpy(iob->data, DM_ACT, DM_ACT_SIZE); - - memcpy(QETH_DM_ACT_DEST_ADDR(iob->data), - &card->token.cm_connection_r, QETH_MPC_TOKEN_LENGTH); - memcpy(QETH_DM_ACT_CONNECTION_TOKEN(iob->data), - &card->token.ulp_connection_r, QETH_MPC_TOKEN_LENGTH); - rc = qeth_send_control_data(card, DM_ACT_SIZE, iob, NULL, NULL); - return rc; -} - -static int -qeth_mpc_initialize(struct qeth_card *card) -{ - int rc; - - QETH_DBF_TEXT(setup,2,"mpcinit"); - - if ((rc = qeth_issue_next_read(card))){ - QETH_DBF_TEXT_(setup, 2, "1err%d", rc); - return rc; - } - if ((rc = qeth_cm_enable(card))){ - QETH_DBF_TEXT_(setup, 2, "2err%d", rc); - goto out_qdio; - } - if ((rc = qeth_cm_setup(card))){ - QETH_DBF_TEXT_(setup, 2, "3err%d", rc); - goto out_qdio; - } - if ((rc = qeth_ulp_enable(card))){ - QETH_DBF_TEXT_(setup, 2, "4err%d", rc); - goto out_qdio; - } - if ((rc = qeth_ulp_setup(card))){ - QETH_DBF_TEXT_(setup, 2, "5err%d", rc); - goto out_qdio; - } - if ((rc = qeth_alloc_qdio_buffers(card))){ - QETH_DBF_TEXT_(setup, 2, "5err%d", rc); - goto out_qdio; - } - if ((rc = qeth_qdio_establish(card))){ - QETH_DBF_TEXT_(setup, 2, "6err%d", rc); - qeth_free_qdio_buffers(card); - goto out_qdio; - } - if ((rc = qeth_qdio_activate(card))){ - QETH_DBF_TEXT_(setup, 2, "7err%d", rc); - goto out_qdio; - } - if ((rc = 
qeth_dm_act(card))){ - QETH_DBF_TEXT_(setup, 2, "8err%d", rc); - goto out_qdio; - } - - return 0; -out_qdio: - qeth_qdio_clear_card(card, card->info.type!=QETH_CARD_TYPE_IQD); - return rc; -} - -static struct net_device * -qeth_get_netdevice(enum qeth_card_types type, enum qeth_link_types linktype) -{ - struct net_device *dev = NULL; - - switch (type) { - case QETH_CARD_TYPE_OSAE: - switch (linktype) { - case QETH_LINK_TYPE_LANE_TR: - case QETH_LINK_TYPE_HSTR: -#ifdef CONFIG_TR - dev = alloc_trdev(0); -#endif /* CONFIG_TR */ - break; - default: - dev = alloc_etherdev(0); - } - break; - case QETH_CARD_TYPE_IQD: - dev = alloc_netdev(0, "hsi%d", ether_setup); - break; - case QETH_CARD_TYPE_OSN: - dev = alloc_netdev(0, "osn%d", ether_setup); - break; - default: - dev = alloc_etherdev(0); - } - return dev; -} - -/*hard_header fake function; used in case fake_ll is set */ -static int -qeth_fake_header(struct sk_buff *skb, struct net_device *dev, - unsigned short type, const void *daddr, const void *saddr, - unsigned len) -{ - if(dev->type == ARPHRD_IEEE802_TR){ - struct trh_hdr *hdr; - hdr = (struct trh_hdr *)skb_push(skb, QETH_FAKE_LL_LEN_TR); - memcpy(hdr->saddr, dev->dev_addr, TR_ALEN); - memcpy(hdr->daddr, "FAKELL", TR_ALEN); - return QETH_FAKE_LL_LEN_TR; - - } else { - struct ethhdr *hdr; - hdr = (struct ethhdr *)skb_push(skb, QETH_FAKE_LL_LEN_ETH); - memcpy(hdr->h_source, dev->dev_addr, ETH_ALEN); - memcpy(hdr->h_dest, "FAKELL", ETH_ALEN); - if (type != ETH_P_802_3) - hdr->h_proto = htons(type); - else - hdr->h_proto = htons(len); - return QETH_FAKE_LL_LEN_ETH; - - } -} - -static const struct header_ops qeth_fake_ops = { - .create = qeth_fake_header, - .parse = qeth_hard_header_parse, -}; - -static int -qeth_send_packet(struct qeth_card *, struct sk_buff *); - -static int -qeth_hard_start_xmit(struct sk_buff *skb, struct net_device *dev) -{ - int rc; - struct qeth_card *card; - - QETH_DBF_TEXT(trace, 6, "hrdstxmi"); - card = (struct qeth_card *)dev->priv; - if (skb==NULL) { - card->stats.tx_dropped++; - card->stats.tx_errors++; - /* return OK; otherwise ksoftirqd goes to 100% */ - return NETDEV_TX_OK; - } - if ((card->state != CARD_STATE_UP) || !card->lan_online) { - card->stats.tx_dropped++; - card->stats.tx_errors++; - card->stats.tx_carrier_errors++; - dev_kfree_skb_any(skb); - /* return OK; otherwise ksoftirqd goes to 100% */ - return NETDEV_TX_OK; - } - if (card->options.performance_stats) { - card->perf_stats.outbound_cnt++; - card->perf_stats.outbound_start_time = qeth_get_micros(); - } - netif_stop_queue(dev); - if ((rc = qeth_send_packet(card, skb))) { - if (rc == -EBUSY) { - return NETDEV_TX_BUSY; - } else { - card->stats.tx_errors++; - card->stats.tx_dropped++; - dev_kfree_skb_any(skb); - /*set to OK; otherwise ksoftirqd goes to 100% */ - rc = NETDEV_TX_OK; - } - } - netif_wake_queue(dev); - if (card->options.performance_stats) - card->perf_stats.outbound_time += qeth_get_micros() - - card->perf_stats.outbound_start_time; - return rc; -} - -static int -qeth_verify_vlan_dev(struct net_device *dev, struct qeth_card *card) -{ - int rc = 0; -#ifdef CONFIG_QETH_VLAN - struct vlan_group *vg; - int i; - - if (!(vg = card->vlangrp)) - return rc; - - for (i = 0; i < VLAN_GROUP_ARRAY_LEN; i++){ - if (vlan_group_get_device(vg, i) == dev){ - rc = QETH_VLAN_CARD; - break; - } - } - if (rc && !(vlan_dev_info(dev)->real_dev->priv == (void *)card)) - return 0; - -#endif - return rc; -} - -static int -qeth_verify_dev(struct net_device *dev) -{ - struct qeth_card *card; - unsigned long flags; - 
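/*
 * Editorial sketch, not part of the original patch: the read-side lookup
 * pattern used by qeth_verify_dev() below, namely a reader/writer lock
 * taken with interrupts disabled while the global card list is scanned,
 * so concurrent card add/remove and readers cannot collide.
 * demo_* names are illustrative only.
 */
#include <linux/list.h>
#include <linux/spinlock.h>

struct demo_node {
        struct list_head list;
        int id;
};

static DEFINE_RWLOCK(demo_lock);
static LIST_HEAD(demo_list);

static int demo_find(int id)
{
        struct demo_node *n;
        unsigned long flags;
        int found = 0;

        read_lock_irqsave(&demo_lock, flags);
        list_for_each_entry(n, &demo_list, list) {
                if (n->id == id) {
                        found = 1;
                        break;
                }
        }
        read_unlock_irqrestore(&demo_lock, flags);
        return found;
}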
int rc = 0; - - read_lock_irqsave(&qeth_card_list.rwlock, flags); - list_for_each_entry(card, &qeth_card_list.list, list){ - if (card->dev == dev){ - rc = QETH_REAL_CARD; - break; - } - rc = qeth_verify_vlan_dev(dev, card); - if (rc) - break; - } - read_unlock_irqrestore(&qeth_card_list.rwlock, flags); - - return rc; -} - -static struct qeth_card * -qeth_get_card_from_dev(struct net_device *dev) -{ - struct qeth_card *card = NULL; - int rc; - - rc = qeth_verify_dev(dev); - if (rc == QETH_REAL_CARD) - card = (struct qeth_card *)dev->priv; - else if (rc == QETH_VLAN_CARD) - card = (struct qeth_card *) - vlan_dev_info(dev)->real_dev->priv; - - QETH_DBF_TEXT_(trace, 4, "%d", rc); - return card ; -} - -static void -qeth_tx_timeout(struct net_device *dev) -{ - struct qeth_card *card; - - card = (struct qeth_card *) dev->priv; - card->stats.tx_errors++; - qeth_schedule_recovery(card); -} - -static int -qeth_open(struct net_device *dev) -{ - struct qeth_card *card; - - QETH_DBF_TEXT(trace, 4, "qethopen"); - - card = (struct qeth_card *) dev->priv; - - if (card->state != CARD_STATE_SOFTSETUP) - return -ENODEV; - - if ( (card->info.type != QETH_CARD_TYPE_OSN) && - (card->options.layer2) && - (!(card->info.mac_bits & QETH_LAYER2_MAC_REGISTERED))) { - QETH_DBF_TEXT(trace,4,"nomacadr"); - return -EPERM; - } - card->data.state = CH_STATE_UP; - card->state = CARD_STATE_UP; - card->dev->flags |= IFF_UP; - netif_start_queue(dev); - - if (!card->lan_online && netif_carrier_ok(dev)) - netif_carrier_off(dev); - return 0; -} - -static int -qeth_stop(struct net_device *dev) -{ - struct qeth_card *card; - - QETH_DBF_TEXT(trace, 4, "qethstop"); - - card = (struct qeth_card *) dev->priv; - - netif_tx_disable(dev); - card->dev->flags &= ~IFF_UP; - if (card->state == CARD_STATE_UP) - card->state = CARD_STATE_SOFTSETUP; - return 0; -} - -static int -qeth_get_cast_type(struct qeth_card *card, struct sk_buff *skb) -{ - int cast_type = RTN_UNSPEC; - - if (card->info.type == QETH_CARD_TYPE_OSN) - return cast_type; - - if (skb->dst && skb->dst->neighbour){ - cast_type = skb->dst->neighbour->type; - if ((cast_type == RTN_BROADCAST) || - (cast_type == RTN_MULTICAST) || - (cast_type == RTN_ANYCAST)) - return cast_type; - else - return RTN_UNSPEC; - } - /* try something else */ - if (skb->protocol == ETH_P_IPV6) - return (skb_network_header(skb)[24] == 0xff) ? - RTN_MULTICAST : 0; - else if (skb->protocol == ETH_P_IP) - return ((skb_network_header(skb)[16] & 0xf0) == 0xe0) ? - RTN_MULTICAST : 0; - /* ... */ - if (!memcmp(skb->data, skb->dev->broadcast, 6)) - return RTN_BROADCAST; - else { - u16 hdr_mac; - - hdr_mac = *((u16 *)skb->data); - /* tr multicast? */ - switch (card->info.link_type) { - case QETH_LINK_TYPE_HSTR: - case QETH_LINK_TYPE_LANE_TR: - if ((hdr_mac == QETH_TR_MAC_NC) || - (hdr_mac == QETH_TR_MAC_C)) - return RTN_MULTICAST; - break; - /* eth or so multicast? 
*/ - default: - if ((hdr_mac == QETH_ETH_MAC_V4) || - (hdr_mac == QETH_ETH_MAC_V6)) - return RTN_MULTICAST; - } - } - return cast_type; -} - -static int -qeth_get_priority_queue(struct qeth_card *card, struct sk_buff *skb, - int ipv, int cast_type) -{ - if (!ipv && (card->info.type == QETH_CARD_TYPE_OSAE)) - return card->qdio.default_out_queue; - switch (card->qdio.no_out_queues) { - case 4: - if (cast_type && card->info.is_multicast_different) - return card->info.is_multicast_different & - (card->qdio.no_out_queues - 1); - if (card->qdio.do_prio_queueing && (ipv == 4)) { - const u8 tos = ip_hdr(skb)->tos; - - if (card->qdio.do_prio_queueing==QETH_PRIO_Q_ING_TOS){ - if (tos & IP_TOS_NOTIMPORTANT) - return 3; - if (tos & IP_TOS_HIGHRELIABILITY) - return 2; - if (tos & IP_TOS_HIGHTHROUGHPUT) - return 1; - if (tos & IP_TOS_LOWDELAY) - return 0; - } - if (card->qdio.do_prio_queueing==QETH_PRIO_Q_ING_PREC) - return 3 - (tos >> 6); - } else if (card->qdio.do_prio_queueing && (ipv == 6)) { - /* TODO: IPv6!!! */ - } - return card->qdio.default_out_queue; - case 1: /* fallthrough for single-out-queue 1920-device */ - default: - return card->qdio.default_out_queue; - } -} - -static inline int -qeth_get_ip_version(struct sk_buff *skb) -{ - switch (skb->protocol) { - case ETH_P_IPV6: - return 6; - case ETH_P_IP: - return 4; - default: - return 0; - } -} - -static struct qeth_hdr * -__qeth_prepare_skb(struct qeth_card *card, struct sk_buff *skb, int ipv) -{ -#ifdef CONFIG_QETH_VLAN - u16 *tag; - if (card->vlangrp && vlan_tx_tag_present(skb) && - ((ipv == 6) || card->options.layer2) ) { - /* - * Move the mac addresses (6 bytes src, 6 bytes dest) - * to the beginning of the new header. We are using three - * memcpys instead of one memmove to save cycles. - */ - skb_push(skb, VLAN_HLEN); - skb_copy_to_linear_data(skb, skb->data + 4, 4); - skb_copy_to_linear_data_offset(skb, 4, skb->data + 8, 4); - skb_copy_to_linear_data_offset(skb, 8, skb->data + 12, 4); - tag = (u16 *)(skb->data + 12); - /* - * first two bytes = ETH_P_8021Q (0x8100) - * second two bytes = VLANID - */ - *tag = __constant_htons(ETH_P_8021Q); - *(tag + 1) = htons(vlan_tx_tag_get(skb)); - } -#endif - return ((struct qeth_hdr *) - qeth_push_skb(card, skb, sizeof(struct qeth_hdr))); -} - -static void -__qeth_free_new_skb(struct sk_buff *orig_skb, struct sk_buff *new_skb) -{ - if (orig_skb != new_skb) - dev_kfree_skb_any(new_skb); -} - -static struct sk_buff * -qeth_prepare_skb(struct qeth_card *card, struct sk_buff *skb, - struct qeth_hdr **hdr, int ipv) -{ - struct sk_buff *new_skb, *new_skb2; - - QETH_DBF_TEXT(trace, 6, "prepskb"); - new_skb = skb; - new_skb = qeth_pskb_unshare(skb, GFP_ATOMIC); - if (!new_skb) - return NULL; - new_skb2 = qeth_realloc_headroom(card, new_skb, - sizeof(struct qeth_hdr)); - if (!new_skb2) { - __qeth_free_new_skb(skb, new_skb); - return NULL; - } - if (new_skb != skb) - __qeth_free_new_skb(new_skb2, new_skb); - new_skb = new_skb2; - *hdr = __qeth_prepare_skb(card, new_skb, ipv); - if (*hdr == NULL) { - __qeth_free_new_skb(skb, new_skb); - return NULL; - } - return new_skb; -} - -static inline u8 -qeth_get_qeth_hdr_flags4(int cast_type) -{ - if (cast_type == RTN_MULTICAST) - return QETH_CAST_MULTICAST; - if (cast_type == RTN_BROADCAST) - return QETH_CAST_BROADCAST; - return QETH_CAST_UNICAST; -} - -static inline u8 -qeth_get_qeth_hdr_flags6(int cast_type) -{ - u8 ct = QETH_HDR_PASSTHRU | QETH_HDR_IPV6; - if (cast_type == RTN_MULTICAST) - return ct | QETH_CAST_MULTICAST; - if (cast_type == RTN_ANYCAST) - return 
ct | QETH_CAST_ANYCAST; - if (cast_type == RTN_BROADCAST) - return ct | QETH_CAST_BROADCAST; - return ct | QETH_CAST_UNICAST; -} - -static void -qeth_layer2_get_packet_type(struct qeth_card *card, struct qeth_hdr *hdr, - struct sk_buff *skb) -{ - __u16 hdr_mac; - - if (!memcmp(skb->data+QETH_HEADER_SIZE, - skb->dev->broadcast,6)) { /* broadcast? */ - *(__u32 *)hdr->hdr.l2.flags |= - QETH_LAYER2_FLAG_BROADCAST << 8; - return; - } - hdr_mac=*((__u16*)skb->data); - /* tr multicast? */ - switch (card->info.link_type) { - case QETH_LINK_TYPE_HSTR: - case QETH_LINK_TYPE_LANE_TR: - if ((hdr_mac == QETH_TR_MAC_NC) || - (hdr_mac == QETH_TR_MAC_C) ) - *(__u32 *)hdr->hdr.l2.flags |= - QETH_LAYER2_FLAG_MULTICAST << 8; - else - *(__u32 *)hdr->hdr.l2.flags |= - QETH_LAYER2_FLAG_UNICAST << 8; - break; - /* eth or so multicast? */ - default: - if ( (hdr_mac==QETH_ETH_MAC_V4) || - (hdr_mac==QETH_ETH_MAC_V6) ) - *(__u32 *)hdr->hdr.l2.flags |= - QETH_LAYER2_FLAG_MULTICAST << 8; - else - *(__u32 *)hdr->hdr.l2.flags |= - QETH_LAYER2_FLAG_UNICAST << 8; - } -} - -static void -qeth_layer2_fill_header(struct qeth_card *card, struct qeth_hdr *hdr, - struct sk_buff *skb, int cast_type) -{ - memset(hdr, 0, sizeof(struct qeth_hdr)); - hdr->hdr.l2.id = QETH_HEADER_TYPE_LAYER2; - - /* set byte 0 to "0x02" and byte 3 to casting flags */ - if (cast_type==RTN_MULTICAST) - *(__u32 *)hdr->hdr.l2.flags |= QETH_LAYER2_FLAG_MULTICAST << 8; - else if (cast_type==RTN_BROADCAST) - *(__u32 *)hdr->hdr.l2.flags |= QETH_LAYER2_FLAG_BROADCAST << 8; - else - qeth_layer2_get_packet_type(card, hdr, skb); - - hdr->hdr.l2.pkt_length = skb->len-QETH_HEADER_SIZE; -#ifdef CONFIG_QETH_VLAN - /* VSWITCH relies on the VLAN - * information to be present in - * the QDIO header */ - if ((card->vlangrp != NULL) && - vlan_tx_tag_present(skb)) { - *(__u32 *)hdr->hdr.l2.flags |= QETH_LAYER2_FLAG_VLAN << 8; - hdr->hdr.l2.vlan_id = vlan_tx_tag_get(skb); - } -#endif -} - -void -qeth_fill_header(struct qeth_card *card, struct qeth_hdr *hdr, - struct sk_buff *skb, int ipv, int cast_type) -{ - QETH_DBF_TEXT(trace, 6, "fillhdr"); - - memset(hdr, 0, sizeof(struct qeth_hdr)); - if (card->options.layer2) { - qeth_layer2_fill_header(card, hdr, skb, cast_type); - return; - } - hdr->hdr.l3.id = QETH_HEADER_TYPE_LAYER3; - hdr->hdr.l3.ext_flags = 0; -#ifdef CONFIG_QETH_VLAN - /* - * before we're going to overwrite this location with next hop ip. - * v6 uses passthrough, v4 sets the tag in the QDIO header. - */ - if (card->vlangrp && vlan_tx_tag_present(skb)) { - hdr->hdr.l3.ext_flags = (ipv == 4) ? 
- QETH_HDR_EXT_VLAN_FRAME : - QETH_HDR_EXT_INCLUDE_VLAN_TAG; - hdr->hdr.l3.vlan_id = vlan_tx_tag_get(skb); - } -#endif /* CONFIG_QETH_VLAN */ - hdr->hdr.l3.length = skb->len - sizeof(struct qeth_hdr); - if (ipv == 4) { /* IPv4 */ - hdr->hdr.l3.flags = qeth_get_qeth_hdr_flags4(cast_type); - memset(hdr->hdr.l3.dest_addr, 0, 12); - if ((skb->dst) && (skb->dst->neighbour)) { - *((u32 *) (&hdr->hdr.l3.dest_addr[12])) = - *((u32 *) skb->dst->neighbour->primary_key); - } else { - /* fill in destination address used in ip header */ - *((u32 *)(&hdr->hdr.l3.dest_addr[12])) = - ip_hdr(skb)->daddr; - } - } else if (ipv == 6) { /* IPv6 or passthru */ - hdr->hdr.l3.flags = qeth_get_qeth_hdr_flags6(cast_type); - if ((skb->dst) && (skb->dst->neighbour)) { - memcpy(hdr->hdr.l3.dest_addr, - skb->dst->neighbour->primary_key, 16); - } else { - /* fill in destination address used in ip header */ - memcpy(hdr->hdr.l3.dest_addr, - &ipv6_hdr(skb)->daddr, 16); - } - } else { /* passthrough */ - if((skb->dev->type == ARPHRD_IEEE802_TR) && - !memcmp(skb->data + sizeof(struct qeth_hdr) + - sizeof(__u16), skb->dev->broadcast, 6)) { - hdr->hdr.l3.flags = QETH_CAST_BROADCAST | - QETH_HDR_PASSTHRU; - } else if (!memcmp(skb->data + sizeof(struct qeth_hdr), - skb->dev->broadcast, 6)) { /* broadcast? */ - hdr->hdr.l3.flags = QETH_CAST_BROADCAST | - QETH_HDR_PASSTHRU; - } else { - hdr->hdr.l3.flags = (cast_type == RTN_MULTICAST) ? - QETH_CAST_MULTICAST | QETH_HDR_PASSTHRU : - QETH_CAST_UNICAST | QETH_HDR_PASSTHRU; - } - } -} - -static void -__qeth_fill_buffer(struct sk_buff *skb, struct qdio_buffer *buffer, - int is_tso, int *next_element_to_fill) -{ - int length = skb->len; - int length_here; - int element; - char *data; - int first_lap ; - - element = *next_element_to_fill; - data = skb->data; - first_lap = (is_tso == 0 ? 
1 : 0); - - while (length > 0) { - /* length_here is the remaining amount of data in this page */ - length_here = PAGE_SIZE - ((unsigned long) data % PAGE_SIZE); - if (length < length_here) - length_here = length; - - buffer->element[element].addr = data; - buffer->element[element].length = length_here; - length -= length_here; - if (!length) { - if (first_lap) - buffer->element[element].flags = 0; - else - buffer->element[element].flags = - SBAL_FLAGS_LAST_FRAG; - } else { - if (first_lap) - buffer->element[element].flags = - SBAL_FLAGS_FIRST_FRAG; - else - buffer->element[element].flags = - SBAL_FLAGS_MIDDLE_FRAG; - } - data += length_here; - element++; - first_lap = 0; - } - *next_element_to_fill = element; -} - -static int -qeth_fill_buffer(struct qeth_qdio_out_q *queue, - struct qeth_qdio_out_buffer *buf, - struct sk_buff *skb) -{ - struct qdio_buffer *buffer; - struct qeth_hdr_tso *hdr; - int flush_cnt = 0, hdr_len, large_send = 0; - - QETH_DBF_TEXT(trace, 6, "qdfillbf"); - - buffer = buf->buffer; - atomic_inc(&skb->users); - skb_queue_tail(&buf->skb_list, skb); - - hdr = (struct qeth_hdr_tso *) skb->data; - /*check first on TSO ....*/ - if (hdr->hdr.hdr.l3.id == QETH_HEADER_TYPE_TSO) { - int element = buf->next_element_to_fill; - - hdr_len = sizeof(struct qeth_hdr_tso) + hdr->ext.dg_hdr_len; - /*fill first buffer entry only with header information */ - buffer->element[element].addr = skb->data; - buffer->element[element].length = hdr_len; - buffer->element[element].flags = SBAL_FLAGS_FIRST_FRAG; - buf->next_element_to_fill++; - skb->data += hdr_len; - skb->len -= hdr_len; - large_send = 1; - } - if (skb_shinfo(skb)->nr_frags == 0) - __qeth_fill_buffer(skb, buffer, large_send, - (int *)&buf->next_element_to_fill); - else - __qeth_fill_buffer_frag(skb, buffer, large_send, - (int *)&buf->next_element_to_fill); - - if (!queue->do_pack) { - QETH_DBF_TEXT(trace, 6, "fillbfnp"); - /* set state to PRIMED -> will be flushed */ - atomic_set(&buf->state, QETH_QDIO_BUF_PRIMED); - flush_cnt = 1; - } else { - QETH_DBF_TEXT(trace, 6, "fillbfpa"); - if (queue->card->options.performance_stats) - queue->card->perf_stats.skbs_sent_pack++; - if (buf->next_element_to_fill >= - QETH_MAX_BUFFER_ELEMENTS(queue->card)) { - /* - * packed buffer if full -> set state PRIMED - * -> will be flushed - */ - atomic_set(&buf->state, QETH_QDIO_BUF_PRIMED); - flush_cnt = 1; - } - } - return flush_cnt; -} - -static int -qeth_do_send_packet_fast(struct qeth_card *card, struct qeth_qdio_out_q *queue, - struct sk_buff *skb, struct qeth_hdr *hdr, - int elements_needed, - struct qeth_eddp_context *ctx) -{ - struct qeth_qdio_out_buffer *buffer; - int buffers_needed = 0; - int flush_cnt = 0; - int index; - - QETH_DBF_TEXT(trace, 6, "dosndpfa"); - - /* spin until we get the queue ... */ - while (atomic_cmpxchg(&queue->state, QETH_OUT_Q_UNLOCKED, - QETH_OUT_Q_LOCKED) != QETH_OUT_Q_UNLOCKED); - /* ... 
now we've got the queue */ - index = queue->next_buf_to_fill; - buffer = &queue->bufs[queue->next_buf_to_fill]; - /* - * check if buffer is empty to make sure that we do not 'overtake' - * ourselves and try to fill a buffer that is already primed - */ - if (atomic_read(&buffer->state) != QETH_QDIO_BUF_EMPTY) - goto out; - if (ctx == NULL) - queue->next_buf_to_fill = (queue->next_buf_to_fill + 1) % - QDIO_MAX_BUFFERS_PER_Q; - else { - buffers_needed = qeth_eddp_check_buffers_for_context(queue,ctx); - if (buffers_needed < 0) - goto out; - queue->next_buf_to_fill = - (queue->next_buf_to_fill + buffers_needed) % - QDIO_MAX_BUFFERS_PER_Q; - } - atomic_set(&queue->state, QETH_OUT_Q_UNLOCKED); - if (ctx == NULL) { - qeth_fill_buffer(queue, buffer, skb); - qeth_flush_buffers(queue, 0, index, 1); - } else { - flush_cnt = qeth_eddp_fill_buffer(queue, ctx, index); - WARN_ON(buffers_needed != flush_cnt); - qeth_flush_buffers(queue, 0, index, flush_cnt); - } - return 0; -out: - atomic_set(&queue->state, QETH_OUT_Q_UNLOCKED); - return -EBUSY; -} - -static int -qeth_do_send_packet(struct qeth_card *card, struct qeth_qdio_out_q *queue, - struct sk_buff *skb, struct qeth_hdr *hdr, - int elements_needed, struct qeth_eddp_context *ctx) -{ - struct qeth_qdio_out_buffer *buffer; - int start_index; - int flush_count = 0; - int do_pack = 0; - int tmp; - int rc = 0; - - QETH_DBF_TEXT(trace, 6, "dosndpkt"); - - /* spin until we get the queue ... */ - while (atomic_cmpxchg(&queue->state, QETH_OUT_Q_UNLOCKED, - QETH_OUT_Q_LOCKED) != QETH_OUT_Q_UNLOCKED); - start_index = queue->next_buf_to_fill; - buffer = &queue->bufs[queue->next_buf_to_fill]; - /* - * check if buffer is empty to make sure that we do not 'overtake' - * ourselves and try to fill a buffer that is already primed - */ - if (atomic_read(&buffer->state) != QETH_QDIO_BUF_EMPTY) { - atomic_set(&queue->state, QETH_OUT_Q_UNLOCKED); - return -EBUSY; - } - /* check if we need to switch packing state of this queue */ - qeth_switch_to_packing_if_needed(queue); - if (queue->do_pack){ - do_pack = 1; - if (ctx == NULL) { - /* does packet fit in current buffer? */ - if((QETH_MAX_BUFFER_ELEMENTS(card) - - buffer->next_element_to_fill) < elements_needed){ - /* ... 
no -> set state PRIMED */ - atomic_set(&buffer->state,QETH_QDIO_BUF_PRIMED); - flush_count++; - queue->next_buf_to_fill = - (queue->next_buf_to_fill + 1) % - QDIO_MAX_BUFFERS_PER_Q; - buffer = &queue->bufs[queue->next_buf_to_fill]; - /* we did a step forward, so check buffer state - * again */ - if (atomic_read(&buffer->state) != - QETH_QDIO_BUF_EMPTY){ - qeth_flush_buffers(queue, 0, start_index, flush_count); - atomic_set(&queue->state, QETH_OUT_Q_UNLOCKED); - return -EBUSY; - } - } - } else { - /* check if we have enough elements (including following - * free buffers) to handle eddp context */ - if (qeth_eddp_check_buffers_for_context(queue,ctx) < 0){ - if (net_ratelimit()) - PRINT_WARN("eddp tx_dropped 1\n"); - rc = -EBUSY; - goto out; - } - } - } - if (ctx == NULL) - tmp = qeth_fill_buffer(queue, buffer, skb); - else { - tmp = qeth_eddp_fill_buffer(queue,ctx,queue->next_buf_to_fill); - if (tmp < 0) { - printk("eddp tx_dropped 2\n"); - rc = - EBUSY; - goto out; - } - } - queue->next_buf_to_fill = (queue->next_buf_to_fill + tmp) % - QDIO_MAX_BUFFERS_PER_Q; - flush_count += tmp; -out: - if (flush_count) - qeth_flush_buffers(queue, 0, start_index, flush_count); - else if (!atomic_read(&queue->set_pci_flags_count)) - atomic_xchg(&queue->state, QETH_OUT_Q_LOCKED_FLUSH); - /* - * queue->state will go from LOCKED -> UNLOCKED or from - * LOCKED_FLUSH -> LOCKED if output_handler wanted to 'notify' us - * (switch packing state or flush buffer to get another pci flag out). - * In that case we will enter this loop - */ - while (atomic_dec_return(&queue->state)){ - flush_count = 0; - start_index = queue->next_buf_to_fill; - /* check if we can go back to non-packing state */ - flush_count += qeth_switch_to_nonpacking_if_needed(queue); - /* - * check if we need to flush a packing buffer to get a pci - * flag out on the queue - */ - if (!flush_count && !atomic_read(&queue->set_pci_flags_count)) - flush_count += qeth_flush_buffers_on_no_pci(queue); - if (flush_count) - qeth_flush_buffers(queue, 0, start_index, flush_count); - } - /* at this point the queue is UNLOCKED again */ - if (queue->card->options.performance_stats && do_pack) - queue->card->perf_stats.bufs_sent_pack += flush_count; - - return rc; -} - -static int -qeth_get_elements_no(struct qeth_card *card, void *hdr, - struct sk_buff *skb, int elems) -{ - int elements_needed = 0; - - if (skb_shinfo(skb)->nr_frags > 0) - elements_needed = (skb_shinfo(skb)->nr_frags + 1); - if (elements_needed == 0) - elements_needed = 1 + (((((unsigned long) hdr) % PAGE_SIZE) - + skb->len) >> PAGE_SHIFT); - if ((elements_needed + elems) > QETH_MAX_BUFFER_ELEMENTS(card)){ - PRINT_ERR("Invalid size of IP packet " - "(Number=%d / Length=%d). 
Discarded.\n", - (elements_needed+elems), skb->len); - return 0; - } - return elements_needed; -} - -static void qeth_tx_csum(struct sk_buff *skb) -{ - int tlen; - - if (skb->protocol == htons(ETH_P_IP)) { - tlen = ntohs(ip_hdr(skb)->tot_len) - (ip_hdr(skb)->ihl << 2); - switch (ip_hdr(skb)->protocol) { - case IPPROTO_TCP: - tcp_hdr(skb)->check = 0; - tcp_hdr(skb)->check = csum_tcpudp_magic( - ip_hdr(skb)->saddr, ip_hdr(skb)->daddr, - tlen, ip_hdr(skb)->protocol, - skb_checksum(skb, skb_transport_offset(skb), - tlen, 0)); - break; - case IPPROTO_UDP: - udp_hdr(skb)->check = 0; - udp_hdr(skb)->check = csum_tcpudp_magic( - ip_hdr(skb)->saddr, ip_hdr(skb)->daddr, - tlen, ip_hdr(skb)->protocol, - skb_checksum(skb, skb_transport_offset(skb), - tlen, 0)); - break; - } - } else if (skb->protocol == htons(ETH_P_IPV6)) { - switch (ipv6_hdr(skb)->nexthdr) { - case IPPROTO_TCP: - tcp_hdr(skb)->check = 0; - tcp_hdr(skb)->check = csum_ipv6_magic( - &ipv6_hdr(skb)->saddr, &ipv6_hdr(skb)->daddr, - ipv6_hdr(skb)->payload_len, - ipv6_hdr(skb)->nexthdr, - skb_checksum(skb, skb_transport_offset(skb), - ipv6_hdr(skb)->payload_len, 0)); - break; - case IPPROTO_UDP: - udp_hdr(skb)->check = 0; - udp_hdr(skb)->check = csum_ipv6_magic( - &ipv6_hdr(skb)->saddr, &ipv6_hdr(skb)->daddr, - ipv6_hdr(skb)->payload_len, - ipv6_hdr(skb)->nexthdr, - skb_checksum(skb, skb_transport_offset(skb), - ipv6_hdr(skb)->payload_len, 0)); - break; - } - } -} - -static int -qeth_send_packet(struct qeth_card *card, struct sk_buff *skb) -{ - int ipv = 0; - int cast_type; - struct qeth_qdio_out_q *queue; - struct qeth_hdr *hdr = NULL; - int elements_needed = 0; - enum qeth_large_send_types large_send = QETH_LARGE_SEND_NO; - struct qeth_eddp_context *ctx = NULL; - int tx_bytes = skb->len; - unsigned short nr_frags = skb_shinfo(skb)->nr_frags; - unsigned short tso_size = skb_shinfo(skb)->gso_size; - struct sk_buff *new_skb, *new_skb2; - int rc; - - QETH_DBF_TEXT(trace, 6, "sendpkt"); - - new_skb = skb; - if ((card->info.type == QETH_CARD_TYPE_OSN) && - (skb->protocol == htons(ETH_P_IPV6))) - return -EPERM; - cast_type = qeth_get_cast_type(card, skb); - if ((cast_type == RTN_BROADCAST) && - (card->info.broadcast_capable == 0)) - return -EPERM; - queue = card->qdio.out_qs - [qeth_get_priority_queue(card, skb, ipv, cast_type)]; - if (!card->options.layer2) { - ipv = qeth_get_ip_version(skb); - if ((card->dev->header_ops == &qeth_fake_ops) && ipv) { - new_skb = qeth_pskb_unshare(skb, GFP_ATOMIC); - if (!new_skb) - return -ENOMEM; - if(card->dev->type == ARPHRD_IEEE802_TR){ - skb_pull(new_skb, QETH_FAKE_LL_LEN_TR); - } else { - skb_pull(new_skb, QETH_FAKE_LL_LEN_ETH); - } - } - } - if (skb_is_gso(skb)) - large_send = card->options.large_send; - /* check on OSN device*/ - if (card->info.type == QETH_CARD_TYPE_OSN) - hdr = (struct qeth_hdr *)new_skb->data; - /*are we able to do TSO ? 
*/ - if ((large_send == QETH_LARGE_SEND_TSO) && - (cast_type == RTN_UNSPEC)) { - rc = qeth_tso_prepare_packet(card, new_skb, ipv, cast_type); - if (rc) { - __qeth_free_new_skb(skb, new_skb); - return rc; - } - elements_needed++; - } else if (card->info.type != QETH_CARD_TYPE_OSN) { - new_skb2 = qeth_prepare_skb(card, new_skb, &hdr, ipv); - if (!new_skb2) { - __qeth_free_new_skb(skb, new_skb); - return -EINVAL; - } - if (new_skb != skb) - __qeth_free_new_skb(new_skb2, new_skb); - new_skb = new_skb2; - qeth_fill_header(card, hdr, new_skb, ipv, cast_type); - } - if (large_send == QETH_LARGE_SEND_EDDP) { - ctx = qeth_eddp_create_context(card, new_skb, hdr, - skb->sk->sk_protocol); - if (ctx == NULL) { - __qeth_free_new_skb(skb, new_skb); - PRINT_WARN("could not create eddp context\n"); - return -EINVAL; - } - } else { - int elems = qeth_get_elements_no(card,(void*) hdr, new_skb, - elements_needed); - if (!elems) { - __qeth_free_new_skb(skb, new_skb); - return -EINVAL; - } - elements_needed += elems; - } - - if ((large_send == QETH_LARGE_SEND_NO) && - (skb->ip_summed == CHECKSUM_PARTIAL)) - qeth_tx_csum(new_skb); - - if (card->info.type != QETH_CARD_TYPE_IQD) - rc = qeth_do_send_packet(card, queue, new_skb, hdr, - elements_needed, ctx); - else { - if ((!card->options.layer2) && - (ipv == 0)) { - __qeth_free_new_skb(skb, new_skb); - return -EPERM; - } - rc = qeth_do_send_packet_fast(card, queue, new_skb, hdr, - elements_needed, ctx); - } - if (!rc) { - card->stats.tx_packets++; - card->stats.tx_bytes += tx_bytes; - if (new_skb != skb) - dev_kfree_skb_any(skb); - if (card->options.performance_stats) { - if (tso_size && - !(large_send == QETH_LARGE_SEND_NO)) { - card->perf_stats.large_send_bytes += tx_bytes; - card->perf_stats.large_send_cnt++; - } - if (nr_frags > 0) { - card->perf_stats.sg_skbs_sent++; - /* nr_frags + skb->data */ - card->perf_stats.sg_frags_sent += - nr_frags + 1; - } - } - } else { - card->stats.tx_dropped++; - __qeth_free_new_skb(skb, new_skb); - } - if (ctx != NULL) { - /* drop creator's reference */ - qeth_eddp_put_context(ctx); - /* free skb; it's not referenced by a buffer */ - if (!rc) - dev_kfree_skb_any(new_skb); - } - return rc; -} - -static int -qeth_mdio_read(struct net_device *dev, int phy_id, int regnum) -{ - struct qeth_card *card = (struct qeth_card *) dev->priv; - int rc = 0; - - switch(regnum){ - case MII_BMCR: /* Basic mode control register */ - rc = BMCR_FULLDPLX; - if ((card->info.link_type != QETH_LINK_TYPE_GBIT_ETH)&& - (card->info.link_type != QETH_LINK_TYPE_OSN) && - (card->info.link_type != QETH_LINK_TYPE_10GBIT_ETH)) - rc |= BMCR_SPEED100; - break; - case MII_BMSR: /* Basic mode status register */ - rc = BMSR_ERCAP | BMSR_ANEGCOMPLETE | BMSR_LSTATUS | - BMSR_10HALF | BMSR_10FULL | BMSR_100HALF | BMSR_100FULL | - BMSR_100BASE4; - break; - case MII_PHYSID1: /* PHYS ID 1 */ - rc = (dev->dev_addr[0] << 16) | (dev->dev_addr[1] << 8) | - dev->dev_addr[2]; - rc = (rc >> 5) & 0xFFFF; - break; - case MII_PHYSID2: /* PHYS ID 2 */ - rc = (dev->dev_addr[2] << 10) & 0xFFFF; - break; - case MII_ADVERTISE: /* Advertisement control reg */ - rc = ADVERTISE_ALL; - break; - case MII_LPA: /* Link partner ability reg */ - rc = LPA_10HALF | LPA_10FULL | LPA_100HALF | LPA_100FULL | - LPA_100BASE4 | LPA_LPACK; - break; - case MII_EXPANSION: /* Expansion register */ - break; - case MII_DCOUNTER: /* disconnect counter */ - break; - case MII_FCSCOUNTER: /* false carrier counter */ - break; - case MII_NWAYTEST: /* N-way auto-neg test register */ - break; - case MII_RERRCOUNTER: 
/* rx error counter */ - rc = card->stats.rx_errors; - break; - case MII_SREVISION: /* silicon revision */ - break; - case MII_RESV1: /* reserved 1 */ - break; - case MII_LBRERROR: /* loopback, rx, bypass error */ - break; - case MII_PHYADDR: /* physical address */ - break; - case MII_RESV2: /* reserved 2 */ - break; - case MII_TPISTATUS: /* TPI status for 10mbps */ - break; - case MII_NCONFIG: /* network interface config */ - break; - default: - break; - } - return rc; -} - - -static const char * -qeth_arp_get_error_cause(int *rc) -{ - switch (*rc) { - case QETH_IPA_ARP_RC_FAILED: - *rc = -EIO; - return "operation failed"; - case QETH_IPA_ARP_RC_NOTSUPP: - *rc = -EOPNOTSUPP; - return "operation not supported"; - case QETH_IPA_ARP_RC_OUT_OF_RANGE: - *rc = -EINVAL; - return "argument out of range"; - case QETH_IPA_ARP_RC_Q_NOTSUPP: - *rc = -EOPNOTSUPP; - return "query operation not supported"; - case QETH_IPA_ARP_RC_Q_NO_DATA: - *rc = -ENOENT; - return "no query data available"; - default: - return "unknown error"; - } -} - -static int -qeth_send_simple_setassparms(struct qeth_card *, enum qeth_ipa_funcs, - __u16, long); - -static int -qeth_arp_set_no_entries(struct qeth_card *card, int no_entries) -{ - int tmp; - int rc; - - QETH_DBF_TEXT(trace,3,"arpstnoe"); - - /* - * currently GuestLAN only supports the ARP assist function - * IPA_CMD_ASS_ARP_QUERY_INFO, but not IPA_CMD_ASS_ARP_SET_NO_ENTRIES; - * thus we say EOPNOTSUPP for this ARP function - */ - if (card->info.guestlan) - return -EOPNOTSUPP; - if (!qeth_is_supported(card,IPA_ARP_PROCESSING)) { - PRINT_WARN("ARP processing not supported " - "on %s!\n", QETH_CARD_IFNAME(card)); - return -EOPNOTSUPP; - } - rc = qeth_send_simple_setassparms(card, IPA_ARP_PROCESSING, - IPA_CMD_ASS_ARP_SET_NO_ENTRIES, - no_entries); - if (rc) { - tmp = rc; - PRINT_WARN("Could not set number of ARP entries on %s: " - "%s (0x%x/%d)\n", - QETH_CARD_IFNAME(card), qeth_arp_get_error_cause(&rc), - tmp, tmp); - } - return rc; -} - -static void -qeth_copy_arp_entries_stripped(struct qeth_arp_query_info *qinfo, - struct qeth_arp_query_data *qdata, - int entry_size, int uentry_size) -{ - char *entry_ptr; - char *uentry_ptr; - int i; - - entry_ptr = (char *)&qdata->data; - uentry_ptr = (char *)(qinfo->udata + qinfo->udata_offset); - for (i = 0; i < qdata->no_entries; ++i){ - /* strip off 32 bytes "media specific information" */ - memcpy(uentry_ptr, (entry_ptr + 32), entry_size - 32); - entry_ptr += entry_size; - uentry_ptr += uentry_size; - } -} - -static int -qeth_arp_query_cb(struct qeth_card *card, struct qeth_reply *reply, - unsigned long data) -{ - struct qeth_ipa_cmd *cmd; - struct qeth_arp_query_data *qdata; - struct qeth_arp_query_info *qinfo; - int entry_size; - int uentry_size; - int i; - - QETH_DBF_TEXT(trace,4,"arpquecb"); - - qinfo = (struct qeth_arp_query_info *) reply->param; - cmd = (struct qeth_ipa_cmd *) data; - if (cmd->hdr.return_code) { - QETH_DBF_TEXT_(trace,4,"qaer1%i", cmd->hdr.return_code); - return 0; - } - if (cmd->data.setassparms.hdr.return_code) { - cmd->hdr.return_code = cmd->data.setassparms.hdr.return_code; - QETH_DBF_TEXT_(trace,4,"qaer2%i", cmd->hdr.return_code); - return 0; - } - qdata = &cmd->data.setassparms.data.query_arp; - switch(qdata->reply_bits){ - case 5: - uentry_size = entry_size = sizeof(struct qeth_arp_qi_entry5); - if (qinfo->mask_bits & QETH_QARP_STRIP_ENTRIES) - uentry_size = sizeof(struct qeth_arp_qi_entry5_short); - break; - case 7: - /* fall through to default */ - default: - /* tr is the same as eth -> entry7 */ 
- uentry_size = entry_size = sizeof(struct qeth_arp_qi_entry7); - if (qinfo->mask_bits & QETH_QARP_STRIP_ENTRIES) - uentry_size = sizeof(struct qeth_arp_qi_entry7_short); - break; - } - /* check if there is enough room in userspace */ - if ((qinfo->udata_len - qinfo->udata_offset) < - qdata->no_entries * uentry_size){ - QETH_DBF_TEXT_(trace, 4, "qaer3%i", -ENOMEM); - cmd->hdr.return_code = -ENOMEM; - PRINT_WARN("query ARP user space buffer is too small for " - "the returned number of ARP entries. " - "Aborting query!\n"); - goto out_error; - } - QETH_DBF_TEXT_(trace, 4, "anore%i", - cmd->data.setassparms.hdr.number_of_replies); - QETH_DBF_TEXT_(trace, 4, "aseqn%i", cmd->data.setassparms.hdr.seq_no); - QETH_DBF_TEXT_(trace, 4, "anoen%i", qdata->no_entries); - - if (qinfo->mask_bits & QETH_QARP_STRIP_ENTRIES) { - /* strip off "media specific information" */ - qeth_copy_arp_entries_stripped(qinfo, qdata, entry_size, - uentry_size); - } else - /*copy entries to user buffer*/ - memcpy(qinfo->udata + qinfo->udata_offset, - (char *)&qdata->data, qdata->no_entries*uentry_size); - - qinfo->no_entries += qdata->no_entries; - qinfo->udata_offset += (qdata->no_entries*uentry_size); - /* check if all replies received ... */ - if (cmd->data.setassparms.hdr.seq_no < - cmd->data.setassparms.hdr.number_of_replies) - return 1; - memcpy(qinfo->udata, &qinfo->no_entries, 4); - /* keep STRIP_ENTRIES flag so the user program can distinguish - * stripped entries from normal ones */ - if (qinfo->mask_bits & QETH_QARP_STRIP_ENTRIES) - qdata->reply_bits |= QETH_QARP_STRIP_ENTRIES; - memcpy(qinfo->udata + QETH_QARP_MASK_OFFSET,&qdata->reply_bits,2); - return 0; -out_error: - i = 0; - memcpy(qinfo->udata, &i, 4); - return 0; -} - -static int -qeth_send_ipa_arp_cmd(struct qeth_card *card, struct qeth_cmd_buffer *iob, - int len, int (*reply_cb)(struct qeth_card *, - struct qeth_reply *, - unsigned long), - void *reply_param) -{ - QETH_DBF_TEXT(trace,4,"sendarp"); - - memcpy(iob->data, IPA_PDU_HEADER, IPA_PDU_HEADER_SIZE); - memcpy(QETH_IPA_CMD_DEST_ADDR(iob->data), - &card->token.ulp_connection_r, QETH_MPC_TOKEN_LENGTH); - return qeth_send_control_data(card, IPA_PDU_HEADER_SIZE + len, iob, - reply_cb, reply_param); -} - -static int -qeth_send_ipa_snmp_cmd(struct qeth_card *card, struct qeth_cmd_buffer *iob, - int len, int (*reply_cb)(struct qeth_card *, - struct qeth_reply *, - unsigned long), - void *reply_param) -{ - u16 s1, s2; - - QETH_DBF_TEXT(trace,4,"sendsnmp"); - - memcpy(iob->data, IPA_PDU_HEADER, IPA_PDU_HEADER_SIZE); - memcpy(QETH_IPA_CMD_DEST_ADDR(iob->data), - &card->token.ulp_connection_r, QETH_MPC_TOKEN_LENGTH); - /* adjust PDU length fields in IPA_PDU_HEADER */ - s1 = (u32) IPA_PDU_HEADER_SIZE + len; - s2 = (u32) len; - memcpy(QETH_IPA_PDU_LEN_TOTAL(iob->data), &s1, 2); - memcpy(QETH_IPA_PDU_LEN_PDU1(iob->data), &s2, 2); - memcpy(QETH_IPA_PDU_LEN_PDU2(iob->data), &s2, 2); - memcpy(QETH_IPA_PDU_LEN_PDU3(iob->data), &s2, 2); - return qeth_send_control_data(card, IPA_PDU_HEADER_SIZE + len, iob, - reply_cb, reply_param); -} - -static struct qeth_cmd_buffer * -qeth_get_setassparms_cmd(struct qeth_card *, enum qeth_ipa_funcs, - __u16, __u16, enum qeth_prot_versions); -static int -qeth_arp_query(struct qeth_card *card, char __user *udata) -{ - struct qeth_cmd_buffer *iob; - struct qeth_arp_query_info qinfo = {0, }; - int tmp; - int rc; - - QETH_DBF_TEXT(trace,3,"arpquery"); - - if (!qeth_is_supported(card,/*IPA_QUERY_ARP_ADDR_INFO*/ - IPA_ARP_PROCESSING)) { - PRINT_WARN("ARP processing not supported " - "on 
%s!\n", QETH_CARD_IFNAME(card)); - return -EOPNOTSUPP; - } - /* get size of userspace buffer and mask_bits -> 6 bytes */ - if (copy_from_user(&qinfo, udata, 6)) - return -EFAULT; - if (!(qinfo.udata = kzalloc(qinfo.udata_len, GFP_KERNEL))) - return -ENOMEM; - qinfo.udata_offset = QETH_QARP_ENTRIES_OFFSET; - iob = qeth_get_setassparms_cmd(card, IPA_ARP_PROCESSING, - IPA_CMD_ASS_ARP_QUERY_INFO, - sizeof(int),QETH_PROT_IPV4); - - rc = qeth_send_ipa_arp_cmd(card, iob, - QETH_SETASS_BASE_LEN+QETH_ARP_CMD_LEN, - qeth_arp_query_cb, (void *)&qinfo); - if (rc) { - tmp = rc; - PRINT_WARN("Error while querying ARP cache on %s: %s " - "(0x%x/%d)\n", - QETH_CARD_IFNAME(card), qeth_arp_get_error_cause(&rc), - tmp, tmp); - if (copy_to_user(udata, qinfo.udata, 4)) - rc = -EFAULT; - } else { - if (copy_to_user(udata, qinfo.udata, qinfo.udata_len)) - rc = -EFAULT; - } - kfree(qinfo.udata); - return rc; -} - -/** - * SNMP command callback - */ -static int -qeth_snmp_command_cb(struct qeth_card *card, struct qeth_reply *reply, - unsigned long sdata) -{ - struct qeth_ipa_cmd *cmd; - struct qeth_arp_query_info *qinfo; - struct qeth_snmp_cmd *snmp; - unsigned char *data; - __u16 data_len; - - QETH_DBF_TEXT(trace,3,"snpcmdcb"); - - cmd = (struct qeth_ipa_cmd *) sdata; - data = (unsigned char *)((char *)cmd - reply->offset); - qinfo = (struct qeth_arp_query_info *) reply->param; - snmp = &cmd->data.setadapterparms.data.snmp; - - if (cmd->hdr.return_code) { - QETH_DBF_TEXT_(trace,4,"scer1%i", cmd->hdr.return_code); - return 0; - } - if (cmd->data.setadapterparms.hdr.return_code) { - cmd->hdr.return_code = cmd->data.setadapterparms.hdr.return_code; - QETH_DBF_TEXT_(trace,4,"scer2%i", cmd->hdr.return_code); - return 0; - } - data_len = *((__u16*)QETH_IPA_PDU_LEN_PDU1(data)); - if (cmd->data.setadapterparms.hdr.seq_no == 1) - data_len -= (__u16)((char *)&snmp->data - (char *)cmd); - else - data_len -= (__u16)((char*)&snmp->request - (char *)cmd); - - /* check if there is enough room in userspace */ - if ((qinfo->udata_len - qinfo->udata_offset) < data_len) { - QETH_DBF_TEXT_(trace, 4, "scer3%i", -ENOMEM); - cmd->hdr.return_code = -ENOMEM; - return 0; - } - QETH_DBF_TEXT_(trace, 4, "snore%i", - cmd->data.setadapterparms.hdr.used_total); - QETH_DBF_TEXT_(trace, 4, "sseqn%i", cmd->data.setadapterparms.hdr.seq_no); - /*copy entries to user buffer*/ - if (cmd->data.setadapterparms.hdr.seq_no == 1) { - memcpy(qinfo->udata + qinfo->udata_offset, - (char *)snmp, - data_len + offsetof(struct qeth_snmp_cmd,data)); - qinfo->udata_offset += offsetof(struct qeth_snmp_cmd, data); - } else { - memcpy(qinfo->udata + qinfo->udata_offset, - (char *)&snmp->request, data_len); - } - qinfo->udata_offset += data_len; - /* check if all replies received ... 
*/ - QETH_DBF_TEXT_(trace, 4, "srtot%i", - cmd->data.setadapterparms.hdr.used_total); - QETH_DBF_TEXT_(trace, 4, "srseq%i", - cmd->data.setadapterparms.hdr.seq_no); - if (cmd->data.setadapterparms.hdr.seq_no < - cmd->data.setadapterparms.hdr.used_total) - return 1; - return 0; -} - -static struct qeth_cmd_buffer * -qeth_get_ipacmd_buffer(struct qeth_card *, enum qeth_ipa_cmds, - enum qeth_prot_versions ); - -static struct qeth_cmd_buffer * -qeth_get_adapter_cmd(struct qeth_card *card, __u32 command, __u32 cmdlen) -{ - struct qeth_cmd_buffer *iob; - struct qeth_ipa_cmd *cmd; - - iob = qeth_get_ipacmd_buffer(card,IPA_CMD_SETADAPTERPARMS, - QETH_PROT_IPV4); - cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); - cmd->data.setadapterparms.hdr.cmdlength = cmdlen; - cmd->data.setadapterparms.hdr.command_code = command; - cmd->data.setadapterparms.hdr.used_total = 1; - cmd->data.setadapterparms.hdr.seq_no = 1; - - return iob; -} - -/** - * function to send SNMP commands to OSA-E card - */ -static int -qeth_snmp_command(struct qeth_card *card, char __user *udata) -{ - struct qeth_cmd_buffer *iob; - struct qeth_ipa_cmd *cmd; - struct qeth_snmp_ureq *ureq; - int req_len; - struct qeth_arp_query_info qinfo = {0, }; - int rc = 0; - - QETH_DBF_TEXT(trace,3,"snmpcmd"); - - if (card->info.guestlan) - return -EOPNOTSUPP; - - if ((!qeth_adp_supported(card,IPA_SETADP_SET_SNMP_CONTROL)) && - (!card->options.layer2) ) { - PRINT_WARN("SNMP Query MIBS not supported " - "on %s!\n", QETH_CARD_IFNAME(card)); - return -EOPNOTSUPP; - } - /* skip 4 bytes (data_len struct member) to get req_len */ - if (copy_from_user(&req_len, udata + sizeof(int), sizeof(int))) - return -EFAULT; - ureq = kmalloc(req_len+sizeof(struct qeth_snmp_ureq_hdr), GFP_KERNEL); - if (!ureq) { - QETH_DBF_TEXT(trace, 2, "snmpnome"); - return -ENOMEM; - } - if (copy_from_user(ureq, udata, - req_len+sizeof(struct qeth_snmp_ureq_hdr))){ - kfree(ureq); - return -EFAULT; - } - qinfo.udata_len = ureq->hdr.data_len; - if (!(qinfo.udata = kzalloc(qinfo.udata_len, GFP_KERNEL))){ - kfree(ureq); - return -ENOMEM; - } - qinfo.udata_offset = sizeof(struct qeth_snmp_ureq_hdr); - - iob = qeth_get_adapter_cmd(card, IPA_SETADP_SET_SNMP_CONTROL, - QETH_SNMP_SETADP_CMDLENGTH + req_len); - cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); - memcpy(&cmd->data.setadapterparms.data.snmp, &ureq->cmd, req_len); - rc = qeth_send_ipa_snmp_cmd(card, iob, QETH_SETADP_BASE_LEN + req_len, - qeth_snmp_command_cb, (void *)&qinfo); - if (rc) - PRINT_WARN("SNMP command failed on %s: (0x%x)\n", - QETH_CARD_IFNAME(card), rc); - else { - if (copy_to_user(udata, qinfo.udata, qinfo.udata_len)) - rc = -EFAULT; - } - - kfree(ureq); - kfree(qinfo.udata); - return rc; -} - -static int -qeth_default_setassparms_cb(struct qeth_card *, struct qeth_reply *, - unsigned long); - -static int -qeth_default_setadapterparms_cb(struct qeth_card *card, - struct qeth_reply *reply, - unsigned long data); -static int -qeth_send_setassparms(struct qeth_card *, struct qeth_cmd_buffer *, - __u16, long, - int (*reply_cb) - (struct qeth_card *, struct qeth_reply *, unsigned long), - void *reply_param); - -static int -qeth_arp_add_entry(struct qeth_card *card, struct qeth_arp_cache_entry *entry) -{ - struct qeth_cmd_buffer *iob; - char buf[16]; - int tmp; - int rc; - - QETH_DBF_TEXT(trace,3,"arpadent"); - - /* - * currently GuestLAN only supports the ARP assist function - * IPA_CMD_ASS_ARP_QUERY_INFO, but not IPA_CMD_ASS_ARP_ADD_ENTRY; - * thus we say EOPNOTSUPP for this ARP function - */ 
- if (card->info.guestlan) - return -EOPNOTSUPP; - if (!qeth_is_supported(card,IPA_ARP_PROCESSING)) { - PRINT_WARN("ARP processing not supported " - "on %s!\n", QETH_CARD_IFNAME(card)); - return -EOPNOTSUPP; - } - - iob = qeth_get_setassparms_cmd(card, IPA_ARP_PROCESSING, - IPA_CMD_ASS_ARP_ADD_ENTRY, - sizeof(struct qeth_arp_cache_entry), - QETH_PROT_IPV4); - rc = qeth_send_setassparms(card, iob, - sizeof(struct qeth_arp_cache_entry), - (unsigned long) entry, - qeth_default_setassparms_cb, NULL); - if (rc) { - tmp = rc; - qeth_ipaddr4_to_string((u8 *)entry->ipaddr, buf); - PRINT_WARN("Could not add ARP entry for address %s on %s: " - "%s (0x%x/%d)\n", - buf, QETH_CARD_IFNAME(card), - qeth_arp_get_error_cause(&rc), tmp, tmp); - } - return rc; -} - -static int -qeth_arp_remove_entry(struct qeth_card *card, struct qeth_arp_cache_entry *entry) -{ - struct qeth_cmd_buffer *iob; - char buf[16] = {0, }; - int tmp; - int rc; - - QETH_DBF_TEXT(trace,3,"arprment"); - - /* - * currently GuestLAN only supports the ARP assist function - * IPA_CMD_ASS_ARP_QUERY_INFO, but not IPA_CMD_ASS_ARP_REMOVE_ENTRY; - * thus we say EOPNOTSUPP for this ARP function - */ - if (card->info.guestlan) - return -EOPNOTSUPP; - if (!qeth_is_supported(card,IPA_ARP_PROCESSING)) { - PRINT_WARN("ARP processing not supported " - "on %s!\n", QETH_CARD_IFNAME(card)); - return -EOPNOTSUPP; - } - memcpy(buf, entry, 12); - iob = qeth_get_setassparms_cmd(card, IPA_ARP_PROCESSING, - IPA_CMD_ASS_ARP_REMOVE_ENTRY, - 12, - QETH_PROT_IPV4); - rc = qeth_send_setassparms(card, iob, - 12, (unsigned long)buf, - qeth_default_setassparms_cb, NULL); - if (rc) { - tmp = rc; - memset(buf, 0, 16); - qeth_ipaddr4_to_string((u8 *)entry->ipaddr, buf); - PRINT_WARN("Could not delete ARP entry for address %s on %s: " - "%s (0x%x/%d)\n", - buf, QETH_CARD_IFNAME(card), - qeth_arp_get_error_cause(&rc), tmp, tmp); - } - return rc; -} - -static int -qeth_arp_flush_cache(struct qeth_card *card) -{ - int rc; - int tmp; - - QETH_DBF_TEXT(trace,3,"arpflush"); - - /* - * currently GuestLAN only supports the ARP assist function - * IPA_CMD_ASS_ARP_QUERY_INFO, but not IPA_CMD_ASS_ARP_FLUSH_CACHE; - * thus we say EOPNOTSUPP for this ARP function - */ - if (card->info.guestlan || (card->info.type == QETH_CARD_TYPE_IQD)) - return -EOPNOTSUPP; - if (!qeth_is_supported(card,IPA_ARP_PROCESSING)) { - PRINT_WARN("ARP processing not supported " - "on %s!\n", QETH_CARD_IFNAME(card)); - return -EOPNOTSUPP; - } - rc = qeth_send_simple_setassparms(card, IPA_ARP_PROCESSING, - IPA_CMD_ASS_ARP_FLUSH_CACHE, 0); - if (rc){ - tmp = rc; - PRINT_WARN("Could not flush ARP cache on %s: %s (0x%x/%d)\n", - QETH_CARD_IFNAME(card), qeth_arp_get_error_cause(&rc), - tmp, tmp); - } - return rc; -} - -static int -qeth_do_ioctl(struct net_device *dev, struct ifreq *rq, int cmd) -{ - struct qeth_card *card = (struct qeth_card *)dev->priv; - struct qeth_arp_cache_entry arp_entry; - struct mii_ioctl_data *mii_data; - int rc = 0; - - if (!card) - return -ENODEV; - - if ((card->state != CARD_STATE_UP) && - (card->state != CARD_STATE_SOFTSETUP)) - return -ENODEV; - - if (card->info.type == QETH_CARD_TYPE_OSN) - return -EPERM; - - switch (cmd){ - case SIOC_QETH_ARP_SET_NO_ENTRIES: - if ( !capable(CAP_NET_ADMIN) || - (card->options.layer2) ) { - rc = -EPERM; - break; - } - rc = qeth_arp_set_no_entries(card, rq->ifr_ifru.ifru_ivalue); - break; - case SIOC_QETH_ARP_QUERY_INFO: - if ( !capable(CAP_NET_ADMIN) || - (card->options.layer2) ) { - rc = -EPERM; - break; - } - rc = qeth_arp_query(card, 
rq->ifr_ifru.ifru_data); - break; - case SIOC_QETH_ARP_ADD_ENTRY: - if ( !capable(CAP_NET_ADMIN) || - (card->options.layer2) ) { - rc = -EPERM; - break; - } - if (copy_from_user(&arp_entry, rq->ifr_ifru.ifru_data, - sizeof(struct qeth_arp_cache_entry))) - rc = -EFAULT; - else - rc = qeth_arp_add_entry(card, &arp_entry); - break; - case SIOC_QETH_ARP_REMOVE_ENTRY: - if ( !capable(CAP_NET_ADMIN) || - (card->options.layer2) ) { - rc = -EPERM; - break; - } - if (copy_from_user(&arp_entry, rq->ifr_ifru.ifru_data, - sizeof(struct qeth_arp_cache_entry))) - rc = -EFAULT; - else - rc = qeth_arp_remove_entry(card, &arp_entry); - break; - case SIOC_QETH_ARP_FLUSH_CACHE: - if ( !capable(CAP_NET_ADMIN) || - (card->options.layer2) ) { - rc = -EPERM; - break; - } - rc = qeth_arp_flush_cache(card); - break; - case SIOC_QETH_ADP_SET_SNMP_CONTROL: - rc = qeth_snmp_command(card, rq->ifr_ifru.ifru_data); - break; - case SIOC_QETH_GET_CARD_TYPE: - if ((card->info.type == QETH_CARD_TYPE_OSAE) && - !card->info.guestlan) - return 1; - return 0; - break; - case SIOCGMIIPHY: - mii_data = if_mii(rq); - mii_data->phy_id = 0; - break; - case SIOCGMIIREG: - mii_data = if_mii(rq); - if (mii_data->phy_id != 0) - rc = -EINVAL; - else - mii_data->val_out = qeth_mdio_read(dev,mii_data->phy_id, - mii_data->reg_num); - break; - default: - rc = -EOPNOTSUPP; - } - if (rc) - QETH_DBF_TEXT_(trace, 2, "ioce%d", rc); - return rc; -} - -static struct net_device_stats * -qeth_get_stats(struct net_device *dev) -{ - struct qeth_card *card; - - card = (struct qeth_card *) (dev->priv); - - QETH_DBF_TEXT(trace,5,"getstat"); - - return &card->stats; -} - -static int -qeth_change_mtu(struct net_device *dev, int new_mtu) -{ - struct qeth_card *card; - char dbf_text[15]; - - card = (struct qeth_card *) (dev->priv); - - QETH_DBF_TEXT(trace,4,"chgmtu"); - sprintf(dbf_text, "%8x", new_mtu); - QETH_DBF_TEXT(trace,4,dbf_text); - - if (new_mtu < 64) - return -EINVAL; - if (new_mtu > 65535) - return -EINVAL; - if ((!qeth_is_supported(card,IPA_IP_FRAGMENTATION)) && - (!qeth_mtu_is_valid(card, new_mtu))) - return -EINVAL; - dev->mtu = new_mtu; - return 0; -} - -#ifdef CONFIG_QETH_VLAN -static void -qeth_vlan_rx_register(struct net_device *dev, struct vlan_group *grp) -{ - struct qeth_card *card; - unsigned long flags; - - QETH_DBF_TEXT(trace,4,"vlanreg"); - - card = (struct qeth_card *) dev->priv; - spin_lock_irqsave(&card->vlanlock, flags); - card->vlangrp = grp; - spin_unlock_irqrestore(&card->vlanlock, flags); -} - -static void -qeth_free_vlan_buffer(struct qeth_card *card, struct qeth_qdio_out_buffer *buf, - unsigned short vid) -{ - int i; - struct sk_buff *skb; - struct sk_buff_head tmp_list; - - skb_queue_head_init(&tmp_list); - lockdep_set_class(&tmp_list.lock, &qdio_out_skb_queue_key); - for(i = 0; i < QETH_MAX_BUFFER_ELEMENTS(card); ++i){ - while ((skb = skb_dequeue(&buf->skb_list))){ - if (vlan_tx_tag_present(skb) && - (vlan_tx_tag_get(skb) == vid)) { - atomic_dec(&skb->users); - dev_kfree_skb(skb); - } else - skb_queue_tail(&tmp_list, skb); - } - } - while ((skb = skb_dequeue(&tmp_list))) - skb_queue_tail(&buf->skb_list, skb); -} - -static void -qeth_free_vlan_skbs(struct qeth_card *card, unsigned short vid) -{ - int i, j; - - QETH_DBF_TEXT(trace, 4, "frvlskbs"); - for (i = 0; i < card->qdio.no_out_queues; ++i){ - for (j = 0; j < QDIO_MAX_BUFFERS_PER_Q; ++j) - qeth_free_vlan_buffer(card, &card->qdio. 
- out_qs[i]->bufs[j], vid); - } -} - -static void -qeth_free_vlan_addresses4(struct qeth_card *card, unsigned short vid) -{ - struct in_device *in_dev; - struct in_ifaddr *ifa; - struct qeth_ipaddr *addr; - - QETH_DBF_TEXT(trace, 4, "frvaddr4"); - - rcu_read_lock(); - in_dev = __in_dev_get_rcu(vlan_group_get_device(card->vlangrp, vid)); - if (!in_dev) - goto out; - for (ifa = in_dev->ifa_list; ifa; ifa = ifa->ifa_next) { - addr = qeth_get_addr_buffer(QETH_PROT_IPV4); - if (addr){ - addr->u.a4.addr = ifa->ifa_address; - addr->u.a4.mask = ifa->ifa_mask; - addr->type = QETH_IP_TYPE_NORMAL; - if (!qeth_delete_ip(card, addr)) - kfree(addr); - } - } -out: - rcu_read_unlock(); -} - -static void -qeth_free_vlan_addresses6(struct qeth_card *card, unsigned short vid) -{ -#ifdef CONFIG_QETH_IPV6 - struct inet6_dev *in6_dev; - struct inet6_ifaddr *ifa; - struct qeth_ipaddr *addr; - - QETH_DBF_TEXT(trace, 4, "frvaddr6"); - - in6_dev = in6_dev_get(vlan_group_get_device(card->vlangrp, vid)); - if (!in6_dev) - return; - for (ifa = in6_dev->addr_list; ifa; ifa = ifa->lst_next){ - addr = qeth_get_addr_buffer(QETH_PROT_IPV6); - if (addr){ - memcpy(&addr->u.a6.addr, &ifa->addr, - sizeof(struct in6_addr)); - addr->u.a6.pfxlen = ifa->prefix_len; - addr->type = QETH_IP_TYPE_NORMAL; - if (!qeth_delete_ip(card, addr)) - kfree(addr); - } - } - in6_dev_put(in6_dev); -#endif /* CONFIG_QETH_IPV6 */ -} - -static void -qeth_free_vlan_addresses(struct qeth_card *card, unsigned short vid) -{ - if (card->options.layer2 || !card->vlangrp) - return; - qeth_free_vlan_addresses4(card, vid); - qeth_free_vlan_addresses6(card, vid); -} - -static int -qeth_layer2_send_setdelvlan_cb(struct qeth_card *card, - struct qeth_reply *reply, - unsigned long data) -{ - struct qeth_ipa_cmd *cmd; - - QETH_DBF_TEXT(trace, 2, "L2sdvcb"); - cmd = (struct qeth_ipa_cmd *) data; - if (cmd->hdr.return_code) { - PRINT_ERR("Error in processing VLAN %i on %s: 0x%x. " - "Continuing\n",cmd->data.setdelvlan.vlan_id, - QETH_CARD_IFNAME(card), cmd->hdr.return_code); - QETH_DBF_TEXT_(trace, 2, "L2VL%4x", cmd->hdr.command); - QETH_DBF_TEXT_(trace, 2, "L2%s", CARD_BUS_ID(card)); - QETH_DBF_TEXT_(trace, 2, "err%d", cmd->hdr.return_code); - } - return 0; -} - -static int -qeth_layer2_send_setdelvlan(struct qeth_card *card, __u16 i, - enum qeth_ipa_cmds ipacmd) -{ - struct qeth_ipa_cmd *cmd; - struct qeth_cmd_buffer *iob; - - QETH_DBF_TEXT_(trace, 4, "L2sdv%x",ipacmd); - iob = qeth_get_ipacmd_buffer(card, ipacmd, QETH_PROT_IPV4); - cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); - cmd->data.setdelvlan.vlan_id = i; - return qeth_send_ipa_cmd(card, iob, - qeth_layer2_send_setdelvlan_cb, NULL); -} - -static void -qeth_layer2_process_vlans(struct qeth_card *card, int clear) -{ - unsigned short i; - - QETH_DBF_TEXT(trace, 3, "L2prcvln"); - - if (!card->vlangrp) - return; - for (i = 0; i < VLAN_GROUP_ARRAY_LEN; i++) { - if (vlan_group_get_device(card->vlangrp, i) == NULL) - continue; - if (clear) - qeth_layer2_send_setdelvlan(card, i, IPA_CMD_DELVLAN); - else - qeth_layer2_send_setdelvlan(card, i, IPA_CMD_SETVLAN); - } -} - -/*add_vid is layer 2 used only ....*/ -static void -qeth_vlan_rx_add_vid(struct net_device *dev, unsigned short vid) -{ - struct qeth_card *card; - - QETH_DBF_TEXT_(trace, 4, "aid:%d", vid); - - card = (struct qeth_card *) dev->priv; - if (!card->options.layer2) - return; - qeth_layer2_send_setdelvlan(card, vid, IPA_CMD_SETVLAN); -} - -/*... 
kill_vid used for both modes*/ -static void -qeth_vlan_rx_kill_vid(struct net_device *dev, unsigned short vid) -{ - struct qeth_card *card; - unsigned long flags; - - QETH_DBF_TEXT_(trace, 4, "kid:%d", vid); - - card = (struct qeth_card *) dev->priv; - /* free all skbs for the vlan device */ - qeth_free_vlan_skbs(card, vid); - spin_lock_irqsave(&card->vlanlock, flags); - /* unregister IP addresses of vlan device */ - qeth_free_vlan_addresses(card, vid); - vlan_group_set_device(card->vlangrp, vid, NULL); - spin_unlock_irqrestore(&card->vlanlock, flags); - if (card->options.layer2) - qeth_layer2_send_setdelvlan(card, vid, IPA_CMD_DELVLAN); - qeth_set_multicast_list(card->dev); -} -#endif -/** - * Examine hardware response to SET_PROMISC_MODE - */ -static int -qeth_setadp_promisc_mode_cb(struct qeth_card *card, - struct qeth_reply *reply, - unsigned long data) -{ - struct qeth_ipa_cmd *cmd; - struct qeth_ipacmd_setadpparms *setparms; - - QETH_DBF_TEXT(trace,4,"prmadpcb"); - - cmd = (struct qeth_ipa_cmd *) data; - setparms = &(cmd->data.setadapterparms); - - qeth_default_setadapterparms_cb(card, reply, (unsigned long)cmd); - if (cmd->hdr.return_code) { - QETH_DBF_TEXT_(trace,4,"prmrc%2.2x",cmd->hdr.return_code); - setparms->data.mode = SET_PROMISC_MODE_OFF; - } - card->info.promisc_mode = setparms->data.mode; - return 0; -} -/* - * Set promiscuous mode (on or off) (SET_PROMISC_MODE command) - */ -static void -qeth_setadp_promisc_mode(struct qeth_card *card) -{ - enum qeth_ipa_promisc_modes mode; - struct net_device *dev = card->dev; - struct qeth_cmd_buffer *iob; - struct qeth_ipa_cmd *cmd; - - QETH_DBF_TEXT(trace, 4, "setprom"); - - if (((dev->flags & IFF_PROMISC) && - (card->info.promisc_mode == SET_PROMISC_MODE_ON)) || - (!(dev->flags & IFF_PROMISC) && - (card->info.promisc_mode == SET_PROMISC_MODE_OFF))) - return; - mode = SET_PROMISC_MODE_OFF; - if (dev->flags & IFF_PROMISC) - mode = SET_PROMISC_MODE_ON; - QETH_DBF_TEXT_(trace, 4, "mode:%x", mode); - - iob = qeth_get_adapter_cmd(card, IPA_SETADP_SET_PROMISC_MODE, - sizeof(struct qeth_ipacmd_setadpparms)); - cmd = (struct qeth_ipa_cmd *)(iob->data + IPA_PDU_HEADER_SIZE); - cmd->data.setadapterparms.data.mode = mode; - qeth_send_ipa_cmd(card, iob, qeth_setadp_promisc_mode_cb, NULL); -} - -/** - * set multicast address on card - */ -static void -qeth_set_multicast_list(struct net_device *dev) -{ - struct qeth_card *card = (struct qeth_card *) dev->priv; - - if (card->info.type == QETH_CARD_TYPE_OSN) - return ; - - QETH_DBF_TEXT(trace, 3, "setmulti"); - qeth_delete_mc_addresses(card); - if (card->options.layer2) { - qeth_layer2_add_multicast(card); - goto out; - } - qeth_add_multicast_ipv4(card); -#ifdef CONFIG_QETH_IPV6 - qeth_add_multicast_ipv6(card); -#endif -out: - qeth_set_ip_addr_list(card); - if (!qeth_adp_supported(card, IPA_SETADP_SET_PROMISC_MODE)) - return; - qeth_setadp_promisc_mode(card); -} - -static int -qeth_neigh_setup(struct net_device *dev, struct neigh_parms *np) -{ - return 0; -} - -static void -qeth_get_mac_for_ipm(__u32 ipm, char *mac, struct net_device *dev) -{ - if (dev->type == ARPHRD_IEEE802_TR) - ip_tr_mc_map(ipm, mac); - else - ip_eth_mc_map(ipm, mac); -} - -static struct qeth_ipaddr * -qeth_get_addr_buffer(enum qeth_prot_versions prot) -{ - struct qeth_ipaddr *addr; - - addr = kzalloc(sizeof(struct qeth_ipaddr), GFP_ATOMIC); - if (addr == NULL) { - PRINT_WARN("Not enough memory to add address\n"); - return NULL; - } - addr->type = QETH_IP_TYPE_NORMAL; - addr->proto = prot; - return addr; -} - -int 
-qeth_osn_assist(struct net_device *dev, - void *data, - int data_len) -{ - struct qeth_cmd_buffer *iob; - struct qeth_card *card; - int rc; - - QETH_DBF_TEXT(trace, 2, "osnsdmc"); - if (!dev) - return -ENODEV; - card = (struct qeth_card *)dev->priv; - if (!card) - return -ENODEV; - if ((card->state != CARD_STATE_UP) && - (card->state != CARD_STATE_SOFTSETUP)) - return -ENODEV; - iob = qeth_wait_for_buffer(&card->write); - memcpy(iob->data+IPA_PDU_HEADER_SIZE, data, data_len); - rc = qeth_osn_send_ipa_cmd(card, iob, data_len); - return rc; -} - -static struct net_device * -qeth_netdev_by_devno(unsigned char *read_dev_no) -{ - struct qeth_card *card; - struct net_device *ndev; - unsigned char *readno; - __u16 temp_dev_no, card_dev_no; - char *endp; - unsigned long flags; - - ndev = NULL; - memcpy(&temp_dev_no, read_dev_no, 2); - read_lock_irqsave(&qeth_card_list.rwlock, flags); - list_for_each_entry(card, &qeth_card_list.list, list) { - readno = CARD_RDEV_ID(card); - readno += (strlen(readno) - 4); - card_dev_no = simple_strtoul(readno, &endp, 16); - if (card_dev_no == temp_dev_no) { - ndev = card->dev; - break; - } - } - read_unlock_irqrestore(&qeth_card_list.rwlock, flags); - return ndev; -} - -int -qeth_osn_register(unsigned char *read_dev_no, - struct net_device **dev, - int (*assist_cb)(struct net_device *, void *), - int (*data_cb)(struct sk_buff *)) -{ - struct qeth_card * card; - - QETH_DBF_TEXT(trace, 2, "osnreg"); - *dev = qeth_netdev_by_devno(read_dev_no); - if (*dev == NULL) - return -ENODEV; - card = (struct qeth_card *)(*dev)->priv; - if (!card) - return -ENODEV; - if ((assist_cb == NULL) || (data_cb == NULL)) - return -EINVAL; - card->osn_info.assist_cb = assist_cb; - card->osn_info.data_cb = data_cb; - return 0; -} - -void -qeth_osn_deregister(struct net_device * dev) -{ - struct qeth_card *card; - - QETH_DBF_TEXT(trace, 2, "osndereg"); - if (!dev) - return; - card = (struct qeth_card *)dev->priv; - if (!card) - return; - card->osn_info.assist_cb = NULL; - card->osn_info.data_cb = NULL; - return; -} - -static void -qeth_delete_mc_addresses(struct qeth_card *card) -{ - struct qeth_ipaddr *iptodo; - unsigned long flags; - - QETH_DBF_TEXT(trace,4,"delmc"); - iptodo = qeth_get_addr_buffer(QETH_PROT_IPV4); - if (!iptodo) { - QETH_DBF_TEXT(trace, 2, "dmcnomem"); - return; - } - iptodo->type = QETH_IP_TYPE_DEL_ALL_MC; - spin_lock_irqsave(&card->ip_lock, flags); - if (!__qeth_insert_ip_todo(card, iptodo, 0)) - kfree(iptodo); - spin_unlock_irqrestore(&card->ip_lock, flags); -} - -static void -qeth_add_mc(struct qeth_card *card, struct in_device *in4_dev) -{ - struct qeth_ipaddr *ipm; - struct ip_mc_list *im4; - char buf[MAX_ADDR_LEN]; - - QETH_DBF_TEXT(trace,4,"addmc"); - for (im4 = in4_dev->mc_list; im4; im4 = im4->next) { - qeth_get_mac_for_ipm(im4->multiaddr, buf, in4_dev->dev); - ipm = qeth_get_addr_buffer(QETH_PROT_IPV4); - if (!ipm) - continue; - ipm->u.a4.addr = im4->multiaddr; - memcpy(ipm->mac,buf,OSA_ADDR_LEN); - ipm->is_multicast = 1; - if (!qeth_add_ip(card,ipm)) - kfree(ipm); - } -} - -static inline void -qeth_add_vlan_mc(struct qeth_card *card) -{ -#ifdef CONFIG_QETH_VLAN - struct in_device *in_dev; - struct vlan_group *vg; - int i; - - QETH_DBF_TEXT(trace,4,"addmcvl"); - if ( ((card->options.layer2 == 0) && - (!qeth_is_supported(card,IPA_FULL_VLAN))) || - (card->vlangrp == NULL) ) - return ; - - vg = card->vlangrp; - for (i = 0; i < VLAN_GROUP_ARRAY_LEN; i++) { - struct net_device *netdev = vlan_group_get_device(vg, i); - if (netdev == NULL || - !(netdev->flags & 
IFF_UP)) - continue; - in_dev = in_dev_get(netdev); - if (!in_dev) - continue; - read_lock(&in_dev->mc_list_lock); - qeth_add_mc(card,in_dev); - read_unlock(&in_dev->mc_list_lock); - in_dev_put(in_dev); - } -#endif -} - -static void -qeth_add_multicast_ipv4(struct qeth_card *card) -{ - struct in_device *in4_dev; - - QETH_DBF_TEXT(trace,4,"chkmcv4"); - in4_dev = in_dev_get(card->dev); - if (in4_dev == NULL) - return; - read_lock(&in4_dev->mc_list_lock); - qeth_add_mc(card, in4_dev); - qeth_add_vlan_mc(card); - read_unlock(&in4_dev->mc_list_lock); - in_dev_put(in4_dev); -} - -static void -qeth_layer2_add_multicast(struct qeth_card *card) -{ - struct qeth_ipaddr *ipm; - struct dev_mc_list *dm; - - QETH_DBF_TEXT(trace,4,"L2addmc"); - for (dm = card->dev->mc_list; dm; dm = dm->next) { - ipm = qeth_get_addr_buffer(QETH_PROT_IPV4); - if (!ipm) - continue; - memcpy(ipm->mac,dm->dmi_addr,MAX_ADDR_LEN); - ipm->is_multicast = 1; - if (!qeth_add_ip(card, ipm)) - kfree(ipm); - } -} - -#ifdef CONFIG_QETH_IPV6 -static void -qeth_add_mc6(struct qeth_card *card, struct inet6_dev *in6_dev) -{ - struct qeth_ipaddr *ipm; - struct ifmcaddr6 *im6; - char buf[MAX_ADDR_LEN]; - - QETH_DBF_TEXT(trace,4,"addmc6"); - for (im6 = in6_dev->mc_list; im6 != NULL; im6 = im6->next) { - ndisc_mc_map(&im6->mca_addr, buf, in6_dev->dev, 0); - ipm = qeth_get_addr_buffer(QETH_PROT_IPV6); - if (!ipm) - continue; - ipm->is_multicast = 1; - memcpy(ipm->mac,buf,OSA_ADDR_LEN); - memcpy(&ipm->u.a6.addr,&im6->mca_addr.s6_addr, - sizeof(struct in6_addr)); - if (!qeth_add_ip(card,ipm)) - kfree(ipm); - } -} - -static inline void -qeth_add_vlan_mc6(struct qeth_card *card) -{ -#ifdef CONFIG_QETH_VLAN - struct inet6_dev *in_dev; - struct vlan_group *vg; - int i; - - QETH_DBF_TEXT(trace,4,"admc6vl"); - if ( ((card->options.layer2 == 0) && - (!qeth_is_supported(card,IPA_FULL_VLAN))) || - (card->vlangrp == NULL)) - return ; - - vg = card->vlangrp; - for (i = 0; i < VLAN_GROUP_ARRAY_LEN; i++) { - struct net_device *netdev = vlan_group_get_device(vg, i); - if (netdev == NULL || - !(netdev->flags & IFF_UP)) - continue; - in_dev = in6_dev_get(netdev); - if (!in_dev) - continue; - read_lock_bh(&in_dev->lock); - qeth_add_mc6(card,in_dev); - read_unlock_bh(&in_dev->lock); - in6_dev_put(in_dev); - } -#endif /* CONFIG_QETH_VLAN */ -} - -static void -qeth_add_multicast_ipv6(struct qeth_card *card) -{ - struct inet6_dev *in6_dev; - - QETH_DBF_TEXT(trace,4,"chkmcv6"); - if (!qeth_is_supported(card, IPA_IPV6)) - return ; - in6_dev = in6_dev_get(card->dev); - if (in6_dev == NULL) - return; - read_lock_bh(&in6_dev->lock); - qeth_add_mc6(card, in6_dev); - qeth_add_vlan_mc6(card); - read_unlock_bh(&in6_dev->lock); - in6_dev_put(in6_dev); -} -#endif /* CONFIG_QETH_IPV6 */ - -static int -qeth_layer2_send_setdelmac(struct qeth_card *card, __u8 *mac, - enum qeth_ipa_cmds ipacmd, - int (*reply_cb) (struct qeth_card *, - struct qeth_reply*, - unsigned long)) -{ - struct qeth_ipa_cmd *cmd; - struct qeth_cmd_buffer *iob; - - QETH_DBF_TEXT(trace, 2, "L2sdmac"); - iob = qeth_get_ipacmd_buffer(card, ipacmd, QETH_PROT_IPV4); - cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); - cmd->data.setdelmac.mac_length = OSA_ADDR_LEN; - memcpy(&cmd->data.setdelmac.mac, mac, OSA_ADDR_LEN); - return qeth_send_ipa_cmd(card, iob, reply_cb, NULL); -} - -static int -qeth_layer2_send_setgroupmac_cb(struct qeth_card *card, - struct qeth_reply *reply, - unsigned long data) -{ - struct qeth_ipa_cmd *cmd; - __u8 *mac; - - QETH_DBF_TEXT(trace, 2, "L2Sgmacb"); - cmd = (struct 
qeth_ipa_cmd *) data; - mac = &cmd->data.setdelmac.mac[0]; - /* MAC already registered, needed in couple/uncouple case */ - if (cmd->hdr.return_code == 0x2005) { - PRINT_WARN("Group MAC %02x:%02x:%02x:%02x:%02x:%02x " \ - "already existing on %s \n", - mac[0], mac[1], mac[2], mac[3], mac[4], mac[5], - QETH_CARD_IFNAME(card)); - cmd->hdr.return_code = 0; - } - if (cmd->hdr.return_code) - PRINT_ERR("Could not set group MAC " \ - "%02x:%02x:%02x:%02x:%02x:%02x on %s: %x\n", - mac[0], mac[1], mac[2], mac[3], mac[4], mac[5], - QETH_CARD_IFNAME(card),cmd->hdr.return_code); - return 0; -} - -static int -qeth_layer2_send_setgroupmac(struct qeth_card *card, __u8 *mac) -{ - QETH_DBF_TEXT(trace, 2, "L2Sgmac"); - return qeth_layer2_send_setdelmac(card, mac, IPA_CMD_SETGMAC, - qeth_layer2_send_setgroupmac_cb); -} - -static int -qeth_layer2_send_delgroupmac_cb(struct qeth_card *card, - struct qeth_reply *reply, - unsigned long data) -{ - struct qeth_ipa_cmd *cmd; - __u8 *mac; - - QETH_DBF_TEXT(trace, 2, "L2Dgmacb"); - cmd = (struct qeth_ipa_cmd *) data; - mac = &cmd->data.setdelmac.mac[0]; - if (cmd->hdr.return_code) - PRINT_ERR("Could not delete group MAC " \ - "%02x:%02x:%02x:%02x:%02x:%02x on %s: %x\n", - mac[0], mac[1], mac[2], mac[3], mac[4], mac[5], - QETH_CARD_IFNAME(card), cmd->hdr.return_code); - return 0; -} - -static int -qeth_layer2_send_delgroupmac(struct qeth_card *card, __u8 *mac) -{ - QETH_DBF_TEXT(trace, 2, "L2Dgmac"); - return qeth_layer2_send_setdelmac(card, mac, IPA_CMD_DELGMAC, - qeth_layer2_send_delgroupmac_cb); -} - -static int -qeth_layer2_send_setmac_cb(struct qeth_card *card, - struct qeth_reply *reply, - unsigned long data) -{ - struct qeth_ipa_cmd *cmd; - - QETH_DBF_TEXT(trace, 2, "L2Smaccb"); - cmd = (struct qeth_ipa_cmd *) data; - if (cmd->hdr.return_code) { - QETH_DBF_TEXT_(trace, 2, "L2er%x", cmd->hdr.return_code); - card->info.mac_bits &= ~QETH_LAYER2_MAC_REGISTERED; - cmd->hdr.return_code = -EIO; - } else { - card->info.mac_bits |= QETH_LAYER2_MAC_REGISTERED; - memcpy(card->dev->dev_addr,cmd->data.setdelmac.mac, - OSA_ADDR_LEN); - PRINT_INFO("MAC address %2.2x:%2.2x:%2.2x:%2.2x:%2.2x:%2.2x " - "successfully registered on device %s\n", - card->dev->dev_addr[0], card->dev->dev_addr[1], - card->dev->dev_addr[2], card->dev->dev_addr[3], - card->dev->dev_addr[4], card->dev->dev_addr[5], - card->dev->name); - } - return 0; -} - -static int -qeth_layer2_send_setmac(struct qeth_card *card, __u8 *mac) -{ - QETH_DBF_TEXT(trace, 2, "L2Setmac"); - return qeth_layer2_send_setdelmac(card, mac, IPA_CMD_SETVMAC, - qeth_layer2_send_setmac_cb); -} - -static int -qeth_layer2_send_delmac_cb(struct qeth_card *card, - struct qeth_reply *reply, - unsigned long data) -{ - struct qeth_ipa_cmd *cmd; - - QETH_DBF_TEXT(trace, 2, "L2Dmaccb"); - cmd = (struct qeth_ipa_cmd *) data; - if (cmd->hdr.return_code) { - QETH_DBF_TEXT_(trace, 2, "err%d", cmd->hdr.return_code); - cmd->hdr.return_code = -EIO; - return 0; - } - card->info.mac_bits &= ~QETH_LAYER2_MAC_REGISTERED; - - return 0; -} -static int -qeth_layer2_send_delmac(struct qeth_card *card, __u8 *mac) -{ - QETH_DBF_TEXT(trace, 2, "L2Delmac"); - if (!(card->info.mac_bits & QETH_LAYER2_MAC_REGISTERED)) - return 0; - return qeth_layer2_send_setdelmac(card, mac, IPA_CMD_DELVMAC, - qeth_layer2_send_delmac_cb); -} - -static int -qeth_layer2_set_mac_address(struct net_device *dev, void *p) -{ - struct sockaddr *addr = p; - struct qeth_card *card; - int rc = 0; - - QETH_DBF_TEXT(trace, 3, "setmac"); - - if (qeth_verify_dev(dev) != QETH_REAL_CARD) { - 
QETH_DBF_TEXT(trace, 3, "setmcINV"); - return -EOPNOTSUPP; - } - card = (struct qeth_card *) dev->priv; - - if (!card->options.layer2) { - PRINT_WARN("Setting MAC address on %s is not supported " - "in Layer 3 mode.\n", dev->name); - QETH_DBF_TEXT(trace, 3, "setmcLY3"); - return -EOPNOTSUPP; - } - if (card->info.type == QETH_CARD_TYPE_OSN) { - PRINT_WARN("Setting MAC address on %s is not supported.\n", - dev->name); - QETH_DBF_TEXT(trace, 3, "setmcOSN"); - return -EOPNOTSUPP; - } - QETH_DBF_TEXT_(trace, 3, "%s", CARD_BUS_ID(card)); - QETH_DBF_HEX(trace, 3, addr->sa_data, OSA_ADDR_LEN); - rc = qeth_layer2_send_delmac(card, &card->dev->dev_addr[0]); - if (!rc) - rc = qeth_layer2_send_setmac(card, addr->sa_data); - return rc; -} - -static void -qeth_fill_ipacmd_header(struct qeth_card *card, struct qeth_ipa_cmd *cmd, - __u8 command, enum qeth_prot_versions prot) -{ - memset(cmd, 0, sizeof (struct qeth_ipa_cmd)); - cmd->hdr.command = command; - cmd->hdr.initiator = IPA_CMD_INITIATOR_HOST; - cmd->hdr.seqno = card->seqno.ipa; - cmd->hdr.adapter_type = qeth_get_ipa_adp_type(card->info.link_type); - cmd->hdr.rel_adapter_no = (__u8) card->info.portno; - if (card->options.layer2) - cmd->hdr.prim_version_no = 2; - else - cmd->hdr.prim_version_no = 1; - cmd->hdr.param_count = 1; - cmd->hdr.prot_version = prot; - cmd->hdr.ipa_supported = 0; - cmd->hdr.ipa_enabled = 0; -} - -static struct qeth_cmd_buffer * -qeth_get_ipacmd_buffer(struct qeth_card *card, enum qeth_ipa_cmds ipacmd, - enum qeth_prot_versions prot) -{ - struct qeth_cmd_buffer *iob; - struct qeth_ipa_cmd *cmd; - - iob = qeth_wait_for_buffer(&card->write); - cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); - qeth_fill_ipacmd_header(card, cmd, ipacmd, prot); - - return iob; -} - -static int -qeth_send_setdelmc(struct qeth_card *card, struct qeth_ipaddr *addr, int ipacmd) -{ - int rc; - struct qeth_cmd_buffer *iob; - struct qeth_ipa_cmd *cmd; - - QETH_DBF_TEXT(trace,4,"setdelmc"); - - iob = qeth_get_ipacmd_buffer(card, ipacmd, addr->proto); - cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); - memcpy(&cmd->data.setdelipm.mac,addr->mac, OSA_ADDR_LEN); - if (addr->proto == QETH_PROT_IPV6) - memcpy(cmd->data.setdelipm.ip6, &addr->u.a6.addr, - sizeof(struct in6_addr)); - else - memcpy(&cmd->data.setdelipm.ip4, &addr->u.a4.addr,4); - - rc = qeth_send_ipa_cmd(card, iob, NULL, NULL); - - return rc; -} -static void -qeth_fill_netmask(u8 *netmask, unsigned int len) -{ - int i,j; - for (i=0;i<16;i++) { - j=(len)-(i*8); - if (j >= 8) - netmask[i] = 0xff; - else if (j > 0) - netmask[i] = (u8)(0xFF00>>j); - else - netmask[i] = 0; - } -} - -static int -qeth_send_setdelip(struct qeth_card *card, struct qeth_ipaddr *addr, - int ipacmd, unsigned int flags) -{ - int rc; - struct qeth_cmd_buffer *iob; - struct qeth_ipa_cmd *cmd; - __u8 netmask[16]; - - QETH_DBF_TEXT(trace,4,"setdelip"); - QETH_DBF_TEXT_(trace,4,"flags%02X", flags); - - iob = qeth_get_ipacmd_buffer(card, ipacmd, addr->proto); - cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); - if (addr->proto == QETH_PROT_IPV6) { - memcpy(cmd->data.setdelip6.ip_addr, &addr->u.a6.addr, - sizeof(struct in6_addr)); - qeth_fill_netmask(netmask,addr->u.a6.pfxlen); - memcpy(cmd->data.setdelip6.mask, netmask, - sizeof(struct in6_addr)); - cmd->data.setdelip6.flags = flags; - } else { - memcpy(cmd->data.setdelip4.ip_addr, &addr->u.a4.addr, 4); - memcpy(cmd->data.setdelip4.mask, &addr->u.a4.mask, 4); - cmd->data.setdelip4.flags = flags; - } - - rc = qeth_send_ipa_cmd(card, iob, NULL, 
NULL); - - return rc; -} - -static int -qeth_layer2_register_addr_entry(struct qeth_card *card, - struct qeth_ipaddr *addr) -{ - if (!addr->is_multicast) - return 0; - QETH_DBF_TEXT(trace, 2, "setgmac"); - QETH_DBF_HEX(trace,3,&addr->mac[0],OSA_ADDR_LEN); - return qeth_layer2_send_setgroupmac(card, &addr->mac[0]); -} - -static int -qeth_layer2_deregister_addr_entry(struct qeth_card *card, - struct qeth_ipaddr *addr) -{ - if (!addr->is_multicast) - return 0; - QETH_DBF_TEXT(trace, 2, "delgmac"); - QETH_DBF_HEX(trace,3,&addr->mac[0],OSA_ADDR_LEN); - return qeth_layer2_send_delgroupmac(card, &addr->mac[0]); -} - -static int -qeth_layer3_register_addr_entry(struct qeth_card *card, - struct qeth_ipaddr *addr) -{ - char buf[50]; - int rc; - int cnt = 3; - - if (addr->proto == QETH_PROT_IPV4) { - QETH_DBF_TEXT(trace, 2,"setaddr4"); - QETH_DBF_HEX(trace, 3, &addr->u.a4.addr, sizeof(int)); - } else if (addr->proto == QETH_PROT_IPV6) { - QETH_DBF_TEXT(trace, 2, "setaddr6"); - QETH_DBF_HEX(trace,3,&addr->u.a6.addr,8); - QETH_DBF_HEX(trace,3,((char *)&addr->u.a6.addr)+8,8); - } else { - QETH_DBF_TEXT(trace, 2, "setaddr?"); - QETH_DBF_HEX(trace, 3, addr, sizeof(struct qeth_ipaddr)); - } - do { - if (addr->is_multicast) - rc = qeth_send_setdelmc(card, addr, IPA_CMD_SETIPM); - else - rc = qeth_send_setdelip(card, addr, IPA_CMD_SETIP, - addr->set_flags); - if (rc) - QETH_DBF_TEXT(trace, 2, "failed"); - } while ((--cnt > 0) && rc); - if (rc){ - QETH_DBF_TEXT(trace, 2, "FAILED"); - qeth_ipaddr_to_string(addr->proto, (u8 *)&addr->u, buf); - PRINT_WARN("Could not register IP address %s (rc=0x%x/%d)\n", - buf, rc, rc); - } - return rc; -} - -static int -qeth_layer3_deregister_addr_entry(struct qeth_card *card, - struct qeth_ipaddr *addr) -{ - //char buf[50]; - int rc; - - if (addr->proto == QETH_PROT_IPV4) { - QETH_DBF_TEXT(trace, 2,"deladdr4"); - QETH_DBF_HEX(trace, 3, &addr->u.a4.addr, sizeof(int)); - } else if (addr->proto == QETH_PROT_IPV6) { - QETH_DBF_TEXT(trace, 2, "deladdr6"); - QETH_DBF_HEX(trace,3,&addr->u.a6.addr,8); - QETH_DBF_HEX(trace,3,((char *)&addr->u.a6.addr)+8,8); - } else { - QETH_DBF_TEXT(trace, 2, "deladdr?"); - QETH_DBF_HEX(trace, 3, addr, sizeof(struct qeth_ipaddr)); - } - if (addr->is_multicast) - rc = qeth_send_setdelmc(card, addr, IPA_CMD_DELIPM); - else - rc = qeth_send_setdelip(card, addr, IPA_CMD_DELIP, - addr->del_flags); - if (rc) { - QETH_DBF_TEXT(trace, 2, "failed"); - /* TODO: re-activate this warning as soon as we have a - * clean mirco code - qeth_ipaddr_to_string(addr->proto, (u8 *)&addr->u, buf); - PRINT_WARN("Could not deregister IP address %s (rc=%x)\n", - buf, rc); - */ - } - return rc; -} - -static int -qeth_register_addr_entry(struct qeth_card *card, struct qeth_ipaddr *addr) -{ - if (card->options.layer2) - return qeth_layer2_register_addr_entry(card, addr); - - return qeth_layer3_register_addr_entry(card, addr); -} - -static int -qeth_deregister_addr_entry(struct qeth_card *card, struct qeth_ipaddr *addr) -{ - if (card->options.layer2) - return qeth_layer2_deregister_addr_entry(card, addr); - - return qeth_layer3_deregister_addr_entry(card, addr); -} - -static u32 -qeth_ethtool_get_tx_csum(struct net_device *dev) -{ - return (dev->features & NETIF_F_HW_CSUM) != 0; -} - -static int -qeth_ethtool_set_tx_csum(struct net_device *dev, u32 data) -{ - if (data) - dev->features |= NETIF_F_HW_CSUM; - else - dev->features &= ~NETIF_F_HW_CSUM; - - return 0; -} - -static u32 -qeth_ethtool_get_rx_csum(struct net_device *dev) -{ - struct qeth_card *card = (struct qeth_card 
*)dev->priv; - - return (card->options.checksum_type == HW_CHECKSUMMING); -} - -static int -qeth_ethtool_set_rx_csum(struct net_device *dev, u32 data) -{ - struct qeth_card *card = (struct qeth_card *)dev->priv; - - if ((card->state != CARD_STATE_DOWN) && - (card->state != CARD_STATE_RECOVER)) - return -EPERM; - if (data) - card->options.checksum_type = HW_CHECKSUMMING; - else - card->options.checksum_type = SW_CHECKSUMMING; - return 0; -} - -static u32 -qeth_ethtool_get_sg(struct net_device *dev) -{ - struct qeth_card *card = (struct qeth_card *)dev->priv; - - return ((card->options.large_send != QETH_LARGE_SEND_NO) && - (dev->features & NETIF_F_SG)); -} - -static int -qeth_ethtool_set_sg(struct net_device *dev, u32 data) -{ - struct qeth_card *card = (struct qeth_card *)dev->priv; - - if (data) { - if (card->options.large_send != QETH_LARGE_SEND_NO) - dev->features |= NETIF_F_SG; - else { - dev->features &= ~NETIF_F_SG; - return -EINVAL; - } - } else - dev->features &= ~NETIF_F_SG; - return 0; -} - -static u32 -qeth_ethtool_get_tso(struct net_device *dev) -{ - struct qeth_card *card = (struct qeth_card *)dev->priv; - - return ((card->options.large_send != QETH_LARGE_SEND_NO) && - (dev->features & NETIF_F_TSO)); -} - -static int -qeth_ethtool_set_tso(struct net_device *dev, u32 data) -{ - struct qeth_card *card = (struct qeth_card *)dev->priv; - - if (data) { - if (card->options.large_send != QETH_LARGE_SEND_NO) - dev->features |= NETIF_F_TSO; - else { - dev->features &= ~NETIF_F_TSO; - return -EINVAL; - } - } else - dev->features &= ~NETIF_F_TSO; - return 0; -} - -static struct ethtool_ops qeth_ethtool_ops = { - .get_tx_csum = qeth_ethtool_get_tx_csum, - .set_tx_csum = qeth_ethtool_set_tx_csum, - .get_rx_csum = qeth_ethtool_get_rx_csum, - .set_rx_csum = qeth_ethtool_set_rx_csum, - .get_sg = qeth_ethtool_get_sg, - .set_sg = qeth_ethtool_set_sg, - .get_tso = qeth_ethtool_get_tso, - .set_tso = qeth_ethtool_set_tso, -}; - -static int -qeth_hard_header_parse(const struct sk_buff *skb, unsigned char *haddr) -{ - const struct qeth_card *card; - const struct ethhdr *eth; - struct net_device *dev = skb->dev; - - if (dev->type != ARPHRD_IEEE802_TR) - return 0; - - card = qeth_get_card_from_dev(dev); - if (card->options.layer2) - goto haveheader; -#ifdef CONFIG_QETH_IPV6 - /* cause of the manipulated arp constructor and the ARP - flag for OSAE devices we have some nasty exceptions */ - if (card->info.type == QETH_CARD_TYPE_OSAE) { - if (!card->options.fake_ll) { - if ((skb->pkt_type==PACKET_OUTGOING) && - (skb->protocol==ETH_P_IPV6)) - goto haveheader; - else - return 0; - } else { - if ((skb->pkt_type==PACKET_OUTGOING) && - (skb->protocol==ETH_P_IP)) - return 0; - else - goto haveheader; - } - } -#endif - if (!card->options.fake_ll) - return 0; -haveheader: - eth = eth_hdr(skb); - memcpy(haddr, eth->h_source, ETH_ALEN); - return ETH_ALEN; -} - -static const struct header_ops qeth_null_ops = { - .parse = qeth_hard_header_parse, -}; - -static int -qeth_netdev_init(struct net_device *dev) -{ - struct qeth_card *card; - - card = (struct qeth_card *) dev->priv; - - QETH_DBF_TEXT(trace,3,"initdev"); - - dev->tx_timeout = &qeth_tx_timeout; - dev->watchdog_timeo = QETH_TX_TIMEOUT; - dev->open = qeth_open; - dev->stop = qeth_stop; - dev->hard_start_xmit = qeth_hard_start_xmit; - dev->do_ioctl = qeth_do_ioctl; - dev->get_stats = qeth_get_stats; - dev->change_mtu = qeth_change_mtu; - dev->neigh_setup = qeth_neigh_setup; - dev->set_multicast_list = qeth_set_multicast_list; -#ifdef CONFIG_QETH_VLAN - 
dev->vlan_rx_register = qeth_vlan_rx_register; - dev->vlan_rx_kill_vid = qeth_vlan_rx_kill_vid; - dev->vlan_rx_add_vid = qeth_vlan_rx_add_vid; -#endif - if (qeth_get_netdev_flags(card) & IFF_NOARP) - dev->header_ops = &qeth_null_ops; - -#ifdef CONFIG_QETH_IPV6 - /*IPv6 address autoconfiguration stuff*/ - if (!(card->info.unique_id & UNIQUE_ID_NOT_BY_CARD)) - card->dev->dev_id = card->info.unique_id & 0xffff; -#endif - if (card->options.fake_ll && - (qeth_get_netdev_flags(card) & IFF_NOARP)) - dev->header_ops = &qeth_fake_ops; - - dev->set_mac_address = qeth_layer2_set_mac_address; - dev->flags |= qeth_get_netdev_flags(card); - if ((card->options.fake_broadcast) || - (card->info.broadcast_capable)) - dev->flags |= IFF_BROADCAST; - dev->hard_header_len = - qeth_get_hlen(card->info.link_type) + card->options.add_hhlen; - dev->addr_len = OSA_ADDR_LEN; - dev->mtu = card->info.initial_mtu; - if (card->info.type != QETH_CARD_TYPE_OSN) - SET_ETHTOOL_OPS(dev, &qeth_ethtool_ops); - return 0; -} - -static void -qeth_init_func_level(struct qeth_card *card) -{ - if (card->ipato.enabled) { - if (card->info.type == QETH_CARD_TYPE_IQD) - card->info.func_level = - QETH_IDX_FUNC_LEVEL_IQD_ENA_IPAT; - else - card->info.func_level = - QETH_IDX_FUNC_LEVEL_OSAE_ENA_IPAT; - } else { - if (card->info.type == QETH_CARD_TYPE_IQD) - /*FIXME:why do we have same values for dis and ena for osae??? */ - card->info.func_level = - QETH_IDX_FUNC_LEVEL_IQD_DIS_IPAT; - else - card->info.func_level = - QETH_IDX_FUNC_LEVEL_OSAE_DIS_IPAT; - } -} - -/** - * hardsetup card, initialize MPC and QDIO stuff - */ -static int -qeth_hardsetup_card(struct qeth_card *card) -{ - int retries = 3; - int rc; - - QETH_DBF_TEXT(setup, 2, "hrdsetup"); - - atomic_set(&card->force_alloc_skb, 0); -retry: - if (retries < 3){ - PRINT_WARN("Retrying to do IDX activates.\n"); - ccw_device_set_offline(CARD_DDEV(card)); - ccw_device_set_offline(CARD_WDEV(card)); - ccw_device_set_offline(CARD_RDEV(card)); - ccw_device_set_online(CARD_RDEV(card)); - ccw_device_set_online(CARD_WDEV(card)); - ccw_device_set_online(CARD_DDEV(card)); - } - rc = qeth_qdio_clear_card(card,card->info.type!=QETH_CARD_TYPE_IQD); - if (rc == -ERESTARTSYS) { - QETH_DBF_TEXT(setup, 2, "break1"); - return rc; - } else if (rc) { - QETH_DBF_TEXT_(setup, 2, "1err%d", rc); - if (--retries < 0) - goto out; - else - goto retry; - } - if ((rc = qeth_get_unitaddr(card))){ - QETH_DBF_TEXT_(setup, 2, "2err%d", rc); - return rc; - } - qeth_init_tokens(card); - qeth_init_func_level(card); - rc = qeth_idx_activate_channel(&card->read, qeth_idx_read_cb); - if (rc == -ERESTARTSYS) { - QETH_DBF_TEXT(setup, 2, "break2"); - return rc; - } else if (rc) { - QETH_DBF_TEXT_(setup, 2, "3err%d", rc); - if (--retries < 0) - goto out; - else - goto retry; - } - rc = qeth_idx_activate_channel(&card->write, qeth_idx_write_cb); - if (rc == -ERESTARTSYS) { - QETH_DBF_TEXT(setup, 2, "break3"); - return rc; - } else if (rc) { - QETH_DBF_TEXT_(setup, 2, "4err%d", rc); - if (--retries < 0) - goto out; - else - goto retry; - } - if ((rc = qeth_mpc_initialize(card))){ - QETH_DBF_TEXT_(setup, 2, "5err%d", rc); - goto out; - } - /*network device will be recovered*/ - if (card->dev) { - card->dev->header_ops = card->orig_header_ops; - if (card->options.fake_ll && - (qeth_get_netdev_flags(card) & IFF_NOARP)) - card->dev->header_ops = &qeth_fake_ops; - return 0; - } - /* at first set_online allocate netdev */ - card->dev = qeth_get_netdevice(card->info.type, - card->info.link_type); - if (!card->dev){ - 
qeth_qdio_clear_card(card, card->info.type != - QETH_CARD_TYPE_IQD); - rc = -ENODEV; - QETH_DBF_TEXT_(setup, 2, "6err%d", rc); - goto out; - } - card->dev->priv = card; - card->orig_header_ops = card->dev->header_ops; - card->dev->type = qeth_get_arphdr_type(card->info.type, - card->info.link_type); - card->dev->init = qeth_netdev_init; - return 0; -out: - PRINT_ERR("Initialization in hardsetup failed! rc=%d\n", rc); - return rc; -} - -static int -qeth_default_setassparms_cb(struct qeth_card *card, struct qeth_reply *reply, - unsigned long data) -{ - struct qeth_ipa_cmd *cmd; - - QETH_DBF_TEXT(trace,4,"defadpcb"); - - cmd = (struct qeth_ipa_cmd *) data; - if (cmd->hdr.return_code == 0){ - cmd->hdr.return_code = cmd->data.setassparms.hdr.return_code; - if (cmd->hdr.prot_version == QETH_PROT_IPV4) - card->options.ipa4.enabled_funcs = cmd->hdr.ipa_enabled; -#ifdef CONFIG_QETH_IPV6 - if (cmd->hdr.prot_version == QETH_PROT_IPV6) - card->options.ipa6.enabled_funcs = cmd->hdr.ipa_enabled; -#endif - } - if (cmd->data.setassparms.hdr.assist_no == IPA_INBOUND_CHECKSUM && - cmd->data.setassparms.hdr.command_code == IPA_CMD_ASS_START) { - card->info.csum_mask = cmd->data.setassparms.data.flags_32bit; - QETH_DBF_TEXT_(trace, 3, "csum:%d", card->info.csum_mask); - } - return 0; -} - -static int -qeth_default_setadapterparms_cb(struct qeth_card *card, - struct qeth_reply *reply, - unsigned long data) -{ - struct qeth_ipa_cmd *cmd; - - QETH_DBF_TEXT(trace,4,"defadpcb"); - - cmd = (struct qeth_ipa_cmd *) data; - if (cmd->hdr.return_code == 0) - cmd->hdr.return_code = cmd->data.setadapterparms.hdr.return_code; - return 0; -} - - - -static int -qeth_query_setadapterparms_cb(struct qeth_card *card, struct qeth_reply *reply, - unsigned long data) -{ - struct qeth_ipa_cmd *cmd; - - QETH_DBF_TEXT(trace,3,"quyadpcb"); - - cmd = (struct qeth_ipa_cmd *) data; - if (cmd->data.setadapterparms.data.query_cmds_supp.lan_type & 0x7f) - card->info.link_type = - cmd->data.setadapterparms.data.query_cmds_supp.lan_type; - card->options.adp.supported_funcs = - cmd->data.setadapterparms.data.query_cmds_supp.supported_cmds; - return qeth_default_setadapterparms_cb(card, reply, (unsigned long)cmd); -} - -static int -qeth_query_setadapterparms(struct qeth_card *card) -{ - int rc; - struct qeth_cmd_buffer *iob; - - QETH_DBF_TEXT(trace,3,"queryadp"); - iob = qeth_get_adapter_cmd(card, IPA_SETADP_QUERY_COMMANDS_SUPPORTED, - sizeof(struct qeth_ipacmd_setadpparms)); - rc = qeth_send_ipa_cmd(card, iob, qeth_query_setadapterparms_cb, NULL); - return rc; -} - -static int -qeth_setadpparms_change_macaddr_cb(struct qeth_card *card, - struct qeth_reply *reply, - unsigned long data) -{ - struct qeth_ipa_cmd *cmd; - - QETH_DBF_TEXT(trace,4,"chgmaccb"); - - cmd = (struct qeth_ipa_cmd *) data; - if (!card->options.layer2 || - !(card->info.mac_bits & QETH_LAYER2_MAC_READ)) { - memcpy(card->dev->dev_addr, - &cmd->data.setadapterparms.data.change_addr.addr, - OSA_ADDR_LEN); - card->info.mac_bits |= QETH_LAYER2_MAC_READ; - } - qeth_default_setadapterparms_cb(card, reply, (unsigned long) cmd); - return 0; -} - -static int -qeth_setadpparms_change_macaddr(struct qeth_card *card) -{ - int rc; - struct qeth_cmd_buffer *iob; - struct qeth_ipa_cmd *cmd; - - QETH_DBF_TEXT(trace,4,"chgmac"); - - iob = qeth_get_adapter_cmd(card,IPA_SETADP_ALTER_MAC_ADDRESS, - sizeof(struct qeth_ipacmd_setadpparms)); - cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); - cmd->data.setadapterparms.data.change_addr.cmd = CHANGE_ADDR_READ_MAC; - 
cmd->data.setadapterparms.data.change_addr.addr_size = OSA_ADDR_LEN; - memcpy(&cmd->data.setadapterparms.data.change_addr.addr, - card->dev->dev_addr, OSA_ADDR_LEN); - rc = qeth_send_ipa_cmd(card, iob, qeth_setadpparms_change_macaddr_cb, - NULL); - return rc; -} - -static int -qeth_send_setadp_mode(struct qeth_card *card, __u32 command, __u32 mode) -{ - int rc; - struct qeth_cmd_buffer *iob; - struct qeth_ipa_cmd *cmd; - - QETH_DBF_TEXT(trace,4,"adpmode"); - - iob = qeth_get_adapter_cmd(card, command, - sizeof(struct qeth_ipacmd_setadpparms)); - cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); - cmd->data.setadapterparms.data.mode = mode; - rc = qeth_send_ipa_cmd(card, iob, qeth_default_setadapterparms_cb, - NULL); - return rc; -} - -static int -qeth_setadapter_hstr(struct qeth_card *card) -{ - int rc; - - QETH_DBF_TEXT(trace,4,"adphstr"); - - if (qeth_adp_supported(card,IPA_SETADP_SET_BROADCAST_MODE)) { - rc = qeth_send_setadp_mode(card, IPA_SETADP_SET_BROADCAST_MODE, - card->options.broadcast_mode); - if (rc) - PRINT_WARN("couldn't set broadcast mode on " - "device %s: x%x\n", - CARD_BUS_ID(card), rc); - rc = qeth_send_setadp_mode(card, IPA_SETADP_ALTER_MAC_ADDRESS, - card->options.macaddr_mode); - if (rc) - PRINT_WARN("couldn't set macaddr mode on " - "device %s: x%x\n", CARD_BUS_ID(card), rc); - return rc; - } - if (card->options.broadcast_mode == QETH_TR_BROADCAST_LOCAL) - PRINT_WARN("set adapter parameters not available " - "to set broadcast mode, using ALLRINGS " - "on device %s:\n", CARD_BUS_ID(card)); - if (card->options.macaddr_mode == QETH_TR_MACADDR_CANONICAL) - PRINT_WARN("set adapter parameters not available " - "to set macaddr mode, using NONCANONICAL " - "on device %s:\n", CARD_BUS_ID(card)); - return 0; -} - -static int -qeth_setadapter_parms(struct qeth_card *card) -{ - int rc; - - QETH_DBF_TEXT(setup, 2, "setadprm"); - - if (!qeth_is_supported(card, IPA_SETADAPTERPARMS)){ - PRINT_WARN("set adapter parameters not supported " - "on device %s.\n", - CARD_BUS_ID(card)); - QETH_DBF_TEXT(setup, 2, " notsupp"); - return 0; - } - rc = qeth_query_setadapterparms(card); - if (rc) { - PRINT_WARN("couldn't set adapter parameters on device %s: " - "x%x\n", CARD_BUS_ID(card), rc); - return rc; - } - if (qeth_adp_supported(card,IPA_SETADP_ALTER_MAC_ADDRESS)) { - rc = qeth_setadpparms_change_macaddr(card); - if (rc) - PRINT_WARN("couldn't get MAC address on " - "device %s: x%x\n", - CARD_BUS_ID(card), rc); - } - - if ((card->info.link_type == QETH_LINK_TYPE_HSTR) || - (card->info.link_type == QETH_LINK_TYPE_LANE_TR)) - rc = qeth_setadapter_hstr(card); - - return rc; -} - -static int -qeth_layer2_initialize(struct qeth_card *card) -{ - int rc = 0; - - - QETH_DBF_TEXT(setup, 2, "doL2init"); - QETH_DBF_TEXT_(setup, 2, "doL2%s", CARD_BUS_ID(card)); - - rc = qeth_query_setadapterparms(card); - if (rc) { - PRINT_WARN("could not query adapter parameters on device %s: " - "x%x\n", CARD_BUS_ID(card), rc); - } - - rc = qeth_setadpparms_change_macaddr(card); - if (rc) { - PRINT_WARN("couldn't get MAC address on " - "device %s: x%x\n", - CARD_BUS_ID(card), rc); - QETH_DBF_TEXT_(setup, 2,"1err%d",rc); - return rc; - } - QETH_DBF_HEX(setup,2, card->dev->dev_addr, OSA_ADDR_LEN); - - rc = qeth_layer2_send_setmac(card, &card->dev->dev_addr[0]); - if (rc) - QETH_DBF_TEXT_(setup, 2,"2err%d",rc); - return 0; -} - - -static int -qeth_send_startstoplan(struct qeth_card *card, enum qeth_ipa_cmds ipacmd, - enum qeth_prot_versions prot) -{ - int rc; - struct qeth_cmd_buffer *iob; - - iob = 
qeth_get_ipacmd_buffer(card,ipacmd,prot); - rc = qeth_send_ipa_cmd(card, iob, NULL, NULL); - - return rc; -} - -static int -qeth_send_startlan(struct qeth_card *card, enum qeth_prot_versions prot) -{ - int rc; - - QETH_DBF_TEXT_(setup, 2, "strtlan%i", prot); - - rc = qeth_send_startstoplan(card, IPA_CMD_STARTLAN, prot); - return rc; -} - -static int -qeth_send_stoplan(struct qeth_card *card) -{ - int rc = 0; - - /* - * TODO: according to the IPA format document page 14, - * TCP/IP (we!) never issue a STOPLAN - * is this right ?!? - */ - QETH_DBF_TEXT(trace, 2, "stoplan"); - - rc = qeth_send_startstoplan(card, IPA_CMD_STOPLAN, QETH_PROT_IPV4); - return rc; -} - -static int -qeth_query_ipassists_cb(struct qeth_card *card, struct qeth_reply *reply, - unsigned long data) -{ - struct qeth_ipa_cmd *cmd; - - QETH_DBF_TEXT(setup, 2, "qipasscb"); - - cmd = (struct qeth_ipa_cmd *) data; - if (cmd->hdr.prot_version == QETH_PROT_IPV4) { - card->options.ipa4.supported_funcs = cmd->hdr.ipa_supported; - card->options.ipa4.enabled_funcs = cmd->hdr.ipa_enabled; - /* Disable IPV6 support hard coded for Hipersockets */ - if(card->info.type == QETH_CARD_TYPE_IQD) - card->options.ipa4.supported_funcs &= ~IPA_IPV6; - } else { -#ifdef CONFIG_QETH_IPV6 - card->options.ipa6.supported_funcs = cmd->hdr.ipa_supported; - card->options.ipa6.enabled_funcs = cmd->hdr.ipa_enabled; -#endif - } - QETH_DBF_TEXT(setup, 2, "suppenbl"); - QETH_DBF_TEXT_(setup, 2, "%x",cmd->hdr.ipa_supported); - QETH_DBF_TEXT_(setup, 2, "%x",cmd->hdr.ipa_enabled); - return 0; -} - -static int -qeth_query_ipassists(struct qeth_card *card, enum qeth_prot_versions prot) -{ - int rc; - struct qeth_cmd_buffer *iob; - - QETH_DBF_TEXT_(setup, 2, "qipassi%i", prot); - if (card->options.layer2) { - QETH_DBF_TEXT(setup, 2, "noprmly2"); - return -EPERM; - } - - iob = qeth_get_ipacmd_buffer(card,IPA_CMD_QIPASSIST,prot); - rc = qeth_send_ipa_cmd(card, iob, qeth_query_ipassists_cb, NULL); - return rc; -} - -static struct qeth_cmd_buffer * -qeth_get_setassparms_cmd(struct qeth_card *card, enum qeth_ipa_funcs ipa_func, - __u16 cmd_code, __u16 len, - enum qeth_prot_versions prot) -{ - struct qeth_cmd_buffer *iob; - struct qeth_ipa_cmd *cmd; - - QETH_DBF_TEXT(trace,4,"getasscm"); - iob = qeth_get_ipacmd_buffer(card,IPA_CMD_SETASSPARMS,prot); - - cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); - cmd->data.setassparms.hdr.assist_no = ipa_func; - cmd->data.setassparms.hdr.length = 8 + len; - cmd->data.setassparms.hdr.command_code = cmd_code; - cmd->data.setassparms.hdr.return_code = 0; - cmd->data.setassparms.hdr.seq_no = 0; - - return iob; -} - -static int -qeth_send_setassparms(struct qeth_card *card, struct qeth_cmd_buffer *iob, - __u16 len, long data, - int (*reply_cb) - (struct qeth_card *,struct qeth_reply *,unsigned long), - void *reply_param) -{ - int rc; - struct qeth_ipa_cmd *cmd; - - QETH_DBF_TEXT(trace,4,"sendassp"); - - cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); - if (len <= sizeof(__u32)) - cmd->data.setassparms.data.flags_32bit = (__u32) data; - else /* (len > sizeof(__u32)) */ - memcpy(&cmd->data.setassparms.data, (void *) data, len); - - rc = qeth_send_ipa_cmd(card, iob, reply_cb, reply_param); - return rc; -} - -#ifdef CONFIG_QETH_IPV6 -static int -qeth_send_simple_setassparms_ipv6(struct qeth_card *card, - enum qeth_ipa_funcs ipa_func, __u16 cmd_code) - -{ - int rc; - struct qeth_cmd_buffer *iob; - - QETH_DBF_TEXT(trace,4,"simassp6"); - iob = qeth_get_setassparms_cmd(card, ipa_func, cmd_code, - 0, QETH_PROT_IPV6); 
- rc = qeth_send_setassparms(card, iob, 0, 0, - qeth_default_setassparms_cb, NULL); - return rc; -} -#endif - -static int -qeth_send_simple_setassparms(struct qeth_card *card, - enum qeth_ipa_funcs ipa_func, - __u16 cmd_code, long data) -{ - int rc; - int length = 0; - struct qeth_cmd_buffer *iob; - - QETH_DBF_TEXT(trace,4,"simassp4"); - if (data) - length = sizeof(__u32); - iob = qeth_get_setassparms_cmd(card, ipa_func, cmd_code, - length, QETH_PROT_IPV4); - rc = qeth_send_setassparms(card, iob, length, data, - qeth_default_setassparms_cb, NULL); - return rc; -} - -static int -qeth_start_ipa_arp_processing(struct qeth_card *card) -{ - int rc; - - QETH_DBF_TEXT(trace,3,"ipaarp"); - - if (!qeth_is_supported(card,IPA_ARP_PROCESSING)) { - PRINT_WARN("ARP processing not supported " - "on %s!\n", QETH_CARD_IFNAME(card)); - return 0; - } - rc = qeth_send_simple_setassparms(card,IPA_ARP_PROCESSING, - IPA_CMD_ASS_START, 0); - if (rc) { - PRINT_WARN("Could not start ARP processing " - "assist on %s: 0x%x\n", - QETH_CARD_IFNAME(card), rc); - } - return rc; -} - -static int -qeth_start_ipa_ip_fragmentation(struct qeth_card *card) -{ - int rc; - - QETH_DBF_TEXT(trace,3,"ipaipfrg"); - - if (!qeth_is_supported(card, IPA_IP_FRAGMENTATION)) { - PRINT_INFO("Hardware IP fragmentation not supported on %s\n", - QETH_CARD_IFNAME(card)); - return -EOPNOTSUPP; - } - - rc = qeth_send_simple_setassparms(card, IPA_IP_FRAGMENTATION, - IPA_CMD_ASS_START, 0); - if (rc) { - PRINT_WARN("Could not start Hardware IP fragmentation " - "assist on %s: 0x%x\n", - QETH_CARD_IFNAME(card), rc); - } else - PRINT_INFO("Hardware IP fragmentation enabled \n"); - return rc; -} - -static int -qeth_start_ipa_source_mac(struct qeth_card *card) -{ - int rc; - - QETH_DBF_TEXT(trace,3,"stsrcmac"); - - if (!card->options.fake_ll) - return -EOPNOTSUPP; - - if (!qeth_is_supported(card, IPA_SOURCE_MAC)) { - PRINT_INFO("Inbound source address not " - "supported on %s\n", QETH_CARD_IFNAME(card)); - return -EOPNOTSUPP; - } - - rc = qeth_send_simple_setassparms(card, IPA_SOURCE_MAC, - IPA_CMD_ASS_START, 0); - if (rc) - PRINT_WARN("Could not start inbound source " - "assist on %s: 0x%x\n", - QETH_CARD_IFNAME(card), rc); - return rc; -} - -static int -qeth_start_ipa_vlan(struct qeth_card *card) -{ - int rc = 0; - - QETH_DBF_TEXT(trace,3,"strtvlan"); - -#ifdef CONFIG_QETH_VLAN - if (!qeth_is_supported(card, IPA_FULL_VLAN)) { - PRINT_WARN("VLAN not supported on %s\n", QETH_CARD_IFNAME(card)); - return -EOPNOTSUPP; - } - - rc = qeth_send_simple_setassparms(card, IPA_VLAN_PRIO, - IPA_CMD_ASS_START,0); - if (rc) { - PRINT_WARN("Could not start vlan " - "assist on %s: 0x%x\n", - QETH_CARD_IFNAME(card), rc); - } else { - PRINT_INFO("VLAN enabled \n"); - card->dev->features |= - NETIF_F_HW_VLAN_FILTER | - NETIF_F_HW_VLAN_TX | - NETIF_F_HW_VLAN_RX; - } -#endif /* QETH_VLAN */ - return rc; -} - -static int -qeth_start_ipa_multicast(struct qeth_card *card) -{ - int rc; - - QETH_DBF_TEXT(trace,3,"stmcast"); - - if (!qeth_is_supported(card, IPA_MULTICASTING)) { - PRINT_WARN("Multicast not supported on %s\n", - QETH_CARD_IFNAME(card)); - return -EOPNOTSUPP; - } - - rc = qeth_send_simple_setassparms(card, IPA_MULTICASTING, - IPA_CMD_ASS_START,0); - if (rc) { - PRINT_WARN("Could not start multicast " - "assist on %s: rc=%i\n", - QETH_CARD_IFNAME(card), rc); - } else { - PRINT_INFO("Multicast enabled\n"); - card->dev->flags |= IFF_MULTICAST; - } - return rc; -} - -#ifdef CONFIG_QETH_IPV6 -static int -qeth_softsetup_ipv6(struct qeth_card *card) -{ - int rc; - - 
QETH_DBF_TEXT(trace,3,"softipv6"); - - rc = qeth_send_startlan(card, QETH_PROT_IPV6); - if (rc) { - PRINT_ERR("IPv6 startlan failed on %s\n", - QETH_CARD_IFNAME(card)); - return rc; - } - rc = qeth_query_ipassists(card,QETH_PROT_IPV6); - if (rc) { - PRINT_ERR("IPv6 query ipassist failed on %s\n", - QETH_CARD_IFNAME(card)); - return rc; - } - rc = qeth_send_simple_setassparms(card, IPA_IPV6, - IPA_CMD_ASS_START, 3); - if (rc) { - PRINT_WARN("IPv6 start assist (version 4) failed " - "on %s: 0x%x\n", - QETH_CARD_IFNAME(card), rc); - return rc; - } - rc = qeth_send_simple_setassparms_ipv6(card, IPA_IPV6, - IPA_CMD_ASS_START); - if (rc) { - PRINT_WARN("IPV6 start assist (version 6) failed " - "on %s: 0x%x\n", - QETH_CARD_IFNAME(card), rc); - return rc; - } - rc = qeth_send_simple_setassparms_ipv6(card, IPA_PASSTHRU, - IPA_CMD_ASS_START); - if (rc) { - PRINT_WARN("Could not enable passthrough " - "on %s: 0x%x\n", - QETH_CARD_IFNAME(card), rc); - return rc; - } - PRINT_INFO("IPV6 enabled \n"); - return 0; -} - -#endif - -static int -qeth_start_ipa_ipv6(struct qeth_card *card) -{ - int rc = 0; -#ifdef CONFIG_QETH_IPV6 - QETH_DBF_TEXT(trace,3,"strtipv6"); - - if (!qeth_is_supported(card, IPA_IPV6)) { - PRINT_WARN("IPv6 not supported on %s\n", - QETH_CARD_IFNAME(card)); - return 0; - } - rc = qeth_softsetup_ipv6(card); -#endif - return rc ; -} - -static int -qeth_start_ipa_broadcast(struct qeth_card *card) -{ - int rc; - - QETH_DBF_TEXT(trace,3,"stbrdcst"); - card->info.broadcast_capable = 0; - if (!qeth_is_supported(card, IPA_FILTERING)) { - PRINT_WARN("Broadcast not supported on %s\n", - QETH_CARD_IFNAME(card)); - rc = -EOPNOTSUPP; - goto out; - } - rc = qeth_send_simple_setassparms(card, IPA_FILTERING, - IPA_CMD_ASS_START, 0); - if (rc) { - PRINT_WARN("Could not enable broadcasting filtering " - "on %s: 0x%x\n", - QETH_CARD_IFNAME(card), rc); - goto out; - } - - rc = qeth_send_simple_setassparms(card, IPA_FILTERING, - IPA_CMD_ASS_CONFIGURE, 1); - if (rc) { - PRINT_WARN("Could not set up broadcast filtering on %s: 0x%x\n", - QETH_CARD_IFNAME(card), rc); - goto out; - } - card->info.broadcast_capable = QETH_BROADCAST_WITH_ECHO; - PRINT_INFO("Broadcast enabled \n"); - rc = qeth_send_simple_setassparms(card, IPA_FILTERING, - IPA_CMD_ASS_ENABLE, 1); - if (rc) { - PRINT_WARN("Could not set up broadcast echo filtering on " - "%s: 0x%x\n", QETH_CARD_IFNAME(card), rc); - goto out; - } - card->info.broadcast_capable = QETH_BROADCAST_WITHOUT_ECHO; -out: - if (card->info.broadcast_capable) - card->dev->flags |= IFF_BROADCAST; - else - card->dev->flags &= ~IFF_BROADCAST; - return rc; -} - -static int -qeth_send_checksum_command(struct qeth_card *card) -{ - int rc; - - rc = qeth_send_simple_setassparms(card, IPA_INBOUND_CHECKSUM, - IPA_CMD_ASS_START, 0); - if (rc) { - PRINT_WARN("Starting Inbound HW Checksumming failed on %s: " - "0x%x,\ncontinuing using Inbound SW Checksumming\n", - QETH_CARD_IFNAME(card), rc); - return rc; - } - rc = qeth_send_simple_setassparms(card, IPA_INBOUND_CHECKSUM, - IPA_CMD_ASS_ENABLE, - card->info.csum_mask); - if (rc) { - PRINT_WARN("Enabling Inbound HW Checksumming failed on %s: " - "0x%x,\ncontinuing using Inbound SW Checksumming\n", - QETH_CARD_IFNAME(card), rc); - return rc; - } - return 0; -} - -static int -qeth_start_ipa_checksum(struct qeth_card *card) -{ - int rc = 0; - - QETH_DBF_TEXT(trace,3,"strtcsum"); - - if (card->options.checksum_type == NO_CHECKSUMMING) { - PRINT_WARN("Using no checksumming on %s.\n", - QETH_CARD_IFNAME(card)); - return 0; - } - if 
(card->options.checksum_type == SW_CHECKSUMMING) { - PRINT_WARN("Using SW checksumming on %s.\n", - QETH_CARD_IFNAME(card)); - return 0; - } - if (!qeth_is_supported(card, IPA_INBOUND_CHECKSUM)) { - PRINT_WARN("Inbound HW Checksumming not " - "supported on %s,\ncontinuing " - "using Inbound SW Checksumming\n", - QETH_CARD_IFNAME(card)); - card->options.checksum_type = SW_CHECKSUMMING; - return 0; - } - rc = qeth_send_checksum_command(card); - if (!rc) { - PRINT_INFO("HW Checksumming (inbound) enabled \n"); - } - return rc; -} - -static int -qeth_start_ipa_tso(struct qeth_card *card) -{ - int rc; - - QETH_DBF_TEXT(trace,3,"sttso"); - - if (!qeth_is_supported(card, IPA_OUTBOUND_TSO)) { - PRINT_WARN("Outbound TSO not supported on %s\n", - QETH_CARD_IFNAME(card)); - rc = -EOPNOTSUPP; - } else { - rc = qeth_send_simple_setassparms(card, IPA_OUTBOUND_TSO, - IPA_CMD_ASS_START,0); - if (rc) - PRINT_WARN("Could not start outbound TSO " - "assist on %s: rc=%i\n", - QETH_CARD_IFNAME(card), rc); - else - PRINT_INFO("Outbound TSO enabled\n"); - } - if (rc && (card->options.large_send == QETH_LARGE_SEND_TSO)){ - card->options.large_send = QETH_LARGE_SEND_NO; - card->dev->features &= ~(NETIF_F_TSO | NETIF_F_SG | - NETIF_F_HW_CSUM); - } - return rc; -} - -static int -qeth_start_ipassists(struct qeth_card *card) -{ - QETH_DBF_TEXT(trace,3,"strtipas"); - qeth_start_ipa_arp_processing(card); /* go on*/ - qeth_start_ipa_ip_fragmentation(card); /* go on*/ - qeth_start_ipa_source_mac(card); /* go on*/ - qeth_start_ipa_vlan(card); /* go on*/ - qeth_start_ipa_multicast(card); /* go on*/ - qeth_start_ipa_ipv6(card); /* go on*/ - qeth_start_ipa_broadcast(card); /* go on*/ - qeth_start_ipa_checksum(card); /* go on*/ - qeth_start_ipa_tso(card); /* go on*/ - return 0; -} - -static int -qeth_send_setrouting(struct qeth_card *card, enum qeth_routing_types type, - enum qeth_prot_versions prot) -{ - int rc; - struct qeth_ipa_cmd *cmd; - struct qeth_cmd_buffer *iob; - - QETH_DBF_TEXT(trace,4,"setroutg"); - iob = qeth_get_ipacmd_buffer(card, IPA_CMD_SETRTG, prot); - cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); - cmd->data.setrtg.type = (type); - rc = qeth_send_ipa_cmd(card, iob, NULL, NULL); - - return rc; - -} - -static void -qeth_correct_routing_type(struct qeth_card *card, enum qeth_routing_types *type, - enum qeth_prot_versions prot) -{ - if (card->info.type == QETH_CARD_TYPE_IQD) { - switch (*type) { - case NO_ROUTER: - case PRIMARY_CONNECTOR: - case SECONDARY_CONNECTOR: - case MULTICAST_ROUTER: - return; - default: - goto out_inval; - } - } else { - switch (*type) { - case NO_ROUTER: - case PRIMARY_ROUTER: - case SECONDARY_ROUTER: - return; - case MULTICAST_ROUTER: - if (qeth_is_ipafunc_supported(card, prot, - IPA_OSA_MC_ROUTER)) - return; - default: - goto out_inval; - } - } -out_inval: - PRINT_WARN("Routing type '%s' not supported for interface %s.\n" - "Router status set to 'no router'.\n", - ((*type == PRIMARY_ROUTER)? "primary router" : - (*type == SECONDARY_ROUTER)? "secondary router" : - (*type == PRIMARY_CONNECTOR)? "primary connector" : - (*type == SECONDARY_CONNECTOR)? "secondary connector" : - (*type == MULTICAST_ROUTER)? 
"multicast router" : - "unknown"), - card->dev->name); - *type = NO_ROUTER; -} - -int -qeth_setrouting_v4(struct qeth_card *card) -{ - int rc; - - QETH_DBF_TEXT(trace,3,"setrtg4"); - - qeth_correct_routing_type(card, &card->options.route4.type, - QETH_PROT_IPV4); - - rc = qeth_send_setrouting(card, card->options.route4.type, - QETH_PROT_IPV4); - if (rc) { - card->options.route4.type = NO_ROUTER; - PRINT_WARN("Error (0x%04x) while setting routing type on %s. " - "Type set to 'no router'.\n", - rc, QETH_CARD_IFNAME(card)); - } - return rc; -} - -int -qeth_setrouting_v6(struct qeth_card *card) -{ - int rc = 0; - - QETH_DBF_TEXT(trace,3,"setrtg6"); -#ifdef CONFIG_QETH_IPV6 - - if (!qeth_is_supported(card, IPA_IPV6)) - return 0; - qeth_correct_routing_type(card, &card->options.route6.type, - QETH_PROT_IPV6); - - rc = qeth_send_setrouting(card, card->options.route6.type, - QETH_PROT_IPV6); - if (rc) { - card->options.route6.type = NO_ROUTER; - PRINT_WARN("Error (0x%04x) while setting routing type on %s. " - "Type set to 'no router'.\n", - rc, QETH_CARD_IFNAME(card)); - } -#endif - return rc; -} - -int -qeth_set_large_send(struct qeth_card *card, enum qeth_large_send_types type) -{ - int rc = 0; - - if (card->dev == NULL) { - card->options.large_send = type; - return 0; - } - if (card->state == CARD_STATE_UP) - netif_tx_disable(card->dev); - card->options.large_send = type; - switch (card->options.large_send) { - case QETH_LARGE_SEND_EDDP: - card->dev->features |= NETIF_F_TSO | NETIF_F_SG | - NETIF_F_HW_CSUM; - break; - case QETH_LARGE_SEND_TSO: - if (qeth_is_supported(card, IPA_OUTBOUND_TSO)){ - card->dev->features |= NETIF_F_TSO | NETIF_F_SG | - NETIF_F_HW_CSUM; - } else { - PRINT_WARN("TSO not supported on %s. " - "large_send set to 'no'.\n", - card->dev->name); - card->dev->features &= ~(NETIF_F_TSO | NETIF_F_SG | - NETIF_F_HW_CSUM); - card->options.large_send = QETH_LARGE_SEND_NO; - rc = -EOPNOTSUPP; - } - break; - default: /* includes QETH_LARGE_SEND_NO */ - card->dev->features &= ~(NETIF_F_TSO | NETIF_F_SG | - NETIF_F_HW_CSUM); - break; - } - if (card->state == CARD_STATE_UP) - netif_wake_queue(card->dev); - return rc; -} - -/* - * softsetup card: init IPA stuff - */ -static int -qeth_softsetup_card(struct qeth_card *card) -{ - int rc; - - QETH_DBF_TEXT(setup, 2, "softsetp"); - - if ((rc = qeth_send_startlan(card, QETH_PROT_IPV4))){ - QETH_DBF_TEXT_(setup, 2, "1err%d", rc); - if (rc == 0xe080){ - PRINT_WARN("LAN on card %s if offline! 
" - "Waiting for STARTLAN from card.\n", - CARD_BUS_ID(card)); - card->lan_online = 0; - } - return rc; - } else - card->lan_online = 1; - if (card->info.type==QETH_CARD_TYPE_OSN) - goto out; - qeth_set_large_send(card, card->options.large_send); - if (card->options.layer2) { - card->dev->features |= - NETIF_F_HW_VLAN_FILTER | - NETIF_F_HW_VLAN_TX | - NETIF_F_HW_VLAN_RX; - card->dev->flags|=IFF_MULTICAST|IFF_BROADCAST; - card->info.broadcast_capable=1; - if ((rc = qeth_layer2_initialize(card))) { - QETH_DBF_TEXT_(setup, 2, "L2err%d", rc); - return rc; - } -#ifdef CONFIG_QETH_VLAN - qeth_layer2_process_vlans(card, 0); -#endif - goto out; - } - if ((rc = qeth_setadapter_parms(card))) - QETH_DBF_TEXT_(setup, 2, "2err%d", rc); - if ((rc = qeth_start_ipassists(card))) - QETH_DBF_TEXT_(setup, 2, "3err%d", rc); - if ((rc = qeth_setrouting_v4(card))) - QETH_DBF_TEXT_(setup, 2, "4err%d", rc); - if ((rc = qeth_setrouting_v6(card))) - QETH_DBF_TEXT_(setup, 2, "5err%d", rc); -out: - netif_tx_disable(card->dev); - return 0; -} - -#ifdef CONFIG_QETH_IPV6 -static int -qeth_get_unique_id_cb(struct qeth_card *card, struct qeth_reply *reply, - unsigned long data) -{ - struct qeth_ipa_cmd *cmd; - - cmd = (struct qeth_ipa_cmd *) data; - if (cmd->hdr.return_code == 0) - card->info.unique_id = *((__u16 *) - &cmd->data.create_destroy_addr.unique_id[6]); - else { - card->info.unique_id = UNIQUE_ID_IF_CREATE_ADDR_FAILED | - UNIQUE_ID_NOT_BY_CARD; - PRINT_WARN("couldn't get a unique id from the card on device " - "%s (result=x%x), using default id. ipv6 " - "autoconfig on other lpars may lead to duplicate " - "ip addresses. please use manually " - "configured ones.\n", - CARD_BUS_ID(card), cmd->hdr.return_code); - } - return 0; -} -#endif - -static int -qeth_put_unique_id(struct qeth_card *card) -{ - - int rc = 0; -#ifdef CONFIG_QETH_IPV6 - struct qeth_cmd_buffer *iob; - struct qeth_ipa_cmd *cmd; - - QETH_DBF_TEXT(trace,2,"puniqeid"); - - if ((card->info.unique_id & UNIQUE_ID_NOT_BY_CARD) == - UNIQUE_ID_NOT_BY_CARD) - return -1; - iob = qeth_get_ipacmd_buffer(card, IPA_CMD_DESTROY_ADDR, - QETH_PROT_IPV6); - cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); - *((__u16 *) &cmd->data.create_destroy_addr.unique_id[6]) = - card->info.unique_id; - memcpy(&cmd->data.create_destroy_addr.unique_id[0], - card->dev->dev_addr, OSA_ADDR_LEN); - rc = qeth_send_ipa_cmd(card, iob, NULL, NULL); -#else - card->info.unique_id = UNIQUE_ID_IF_CREATE_ADDR_FAILED | - UNIQUE_ID_NOT_BY_CARD; -#endif - return rc; -} - -/** - * Clear IP List - */ -static void -qeth_clear_ip_list(struct qeth_card *card, int clean, int recover) -{ - struct qeth_ipaddr *addr, *tmp; - unsigned long flags; - - QETH_DBF_TEXT(trace,4,"clearip"); - spin_lock_irqsave(&card->ip_lock, flags); - /* clear todo list */ - list_for_each_entry_safe(addr, tmp, card->ip_tbd_list, entry){ - list_del(&addr->entry); - kfree(addr); - } - - while (!list_empty(&card->ip_list)) { - addr = list_entry(card->ip_list.next, - struct qeth_ipaddr, entry); - list_del_init(&addr->entry); - if (clean) { - spin_unlock_irqrestore(&card->ip_lock, flags); - qeth_deregister_addr_entry(card, addr); - spin_lock_irqsave(&card->ip_lock, flags); - } - if (!recover || addr->is_multicast) { - kfree(addr); - continue; - } - list_add_tail(&addr->entry, card->ip_tbd_list); - } - spin_unlock_irqrestore(&card->ip_lock, flags); -} - -static void -qeth_set_allowed_threads(struct qeth_card *card, unsigned long threads, - int clear_start_mask) -{ - unsigned long flags; - - 
spin_lock_irqsave(&card->thread_mask_lock, flags); - card->thread_allowed_mask = threads; - if (clear_start_mask) - card->thread_start_mask &= threads; - spin_unlock_irqrestore(&card->thread_mask_lock, flags); - wake_up(&card->wait_q); -} - -static int -qeth_threads_running(struct qeth_card *card, unsigned long threads) -{ - unsigned long flags; - int rc = 0; - - spin_lock_irqsave(&card->thread_mask_lock, flags); - rc = (card->thread_running_mask & threads); - spin_unlock_irqrestore(&card->thread_mask_lock, flags); - return rc; -} - -static int -qeth_wait_for_threads(struct qeth_card *card, unsigned long threads) -{ - return wait_event_interruptible(card->wait_q, - qeth_threads_running(card, threads) == 0); -} - -static int -qeth_stop_card(struct qeth_card *card, int recovery_mode) -{ - int rc = 0; - - QETH_DBF_TEXT(setup ,2,"stopcard"); - QETH_DBF_HEX(setup, 2, &card, sizeof(void *)); - - qeth_set_allowed_threads(card, 0, 1); - if (qeth_wait_for_threads(card, ~QETH_RECOVER_THREAD)) - return -ERESTARTSYS; - if (card->read.state == CH_STATE_UP && - card->write.state == CH_STATE_UP && - (card->state == CARD_STATE_UP)) { - if (recovery_mode && - card->info.type != QETH_CARD_TYPE_OSN) { - qeth_stop(card->dev); - } else { - rtnl_lock(); - dev_close(card->dev); - rtnl_unlock(); - } - if (!card->use_hard_stop) { - __u8 *mac = &card->dev->dev_addr[0]; - rc = qeth_layer2_send_delmac(card, mac); - QETH_DBF_TEXT_(setup, 2, "Lerr%d", rc); - if ((rc = qeth_send_stoplan(card))) - QETH_DBF_TEXT_(setup, 2, "1err%d", rc); - } - card->state = CARD_STATE_SOFTSETUP; - } - if (card->state == CARD_STATE_SOFTSETUP) { -#ifdef CONFIG_QETH_VLAN - if (card->options.layer2) - qeth_layer2_process_vlans(card, 1); -#endif - qeth_clear_ip_list(card, !card->use_hard_stop, 1); - qeth_clear_ipacmd_list(card); - card->state = CARD_STATE_HARDSETUP; - } - if (card->state == CARD_STATE_HARDSETUP) { - if ((!card->use_hard_stop) && - (!card->options.layer2)) - if ((rc = qeth_put_unique_id(card))) - QETH_DBF_TEXT_(setup, 2, "2err%d", rc); - qeth_qdio_clear_card(card, 0); - qeth_clear_qdio_buffers(card); - qeth_clear_working_pool_list(card); - card->state = CARD_STATE_DOWN; - } - if (card->state == CARD_STATE_DOWN) { - qeth_clear_cmd_buffers(&card->read); - qeth_clear_cmd_buffers(&card->write); - } - card->use_hard_stop = 0; - return rc; -} - - -static int -qeth_get_unique_id(struct qeth_card *card) -{ - int rc = 0; -#ifdef CONFIG_QETH_IPV6 - struct qeth_cmd_buffer *iob; - struct qeth_ipa_cmd *cmd; - - QETH_DBF_TEXT(setup, 2, "guniqeid"); - - if (!qeth_is_supported(card,IPA_IPV6)) { - card->info.unique_id = UNIQUE_ID_IF_CREATE_ADDR_FAILED | - UNIQUE_ID_NOT_BY_CARD; - return 0; - } - - iob = qeth_get_ipacmd_buffer(card, IPA_CMD_CREATE_ADDR, - QETH_PROT_IPV6); - cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); - *((__u16 *) &cmd->data.create_destroy_addr.unique_id[6]) = - card->info.unique_id; - - rc = qeth_send_ipa_cmd(card, iob, qeth_get_unique_id_cb, NULL); -#else - card->info.unique_id = UNIQUE_ID_IF_CREATE_ADDR_FAILED | - UNIQUE_ID_NOT_BY_CARD; -#endif - return rc; -} -static void -qeth_print_status_with_portname(struct qeth_card *card) -{ - char dbf_text[15]; - int i; - - sprintf(dbf_text, "%s", card->info.portname + 1); - for (i = 0; i < 8; i++) - dbf_text[i] = - (char) _ebcasc[(__u8) dbf_text[i]]; - dbf_text[8] = 0; - printk("qeth: Device %s/%s/%s is a%s card%s%s%s\n" - "with link type %s (portname: %s)\n", - CARD_RDEV_ID(card), - CARD_WDEV_ID(card), - CARD_DDEV_ID(card), - qeth_get_cardname(card), - 
(card->info.mcl_level[0]) ? " (level: " : "", - (card->info.mcl_level[0]) ? card->info.mcl_level : "", - (card->info.mcl_level[0]) ? ")" : "", - qeth_get_cardname_short(card), - dbf_text); - -} - -static void -qeth_print_status_no_portname(struct qeth_card *card) -{ - if (card->info.portname[0]) - printk("qeth: Device %s/%s/%s is a%s " - "card%s%s%s\nwith link type %s " - "(no portname needed by interface).\n", - CARD_RDEV_ID(card), - CARD_WDEV_ID(card), - CARD_DDEV_ID(card), - qeth_get_cardname(card), - (card->info.mcl_level[0]) ? " (level: " : "", - (card->info.mcl_level[0]) ? card->info.mcl_level : "", - (card->info.mcl_level[0]) ? ")" : "", - qeth_get_cardname_short(card)); - else - printk("qeth: Device %s/%s/%s is a%s " - "card%s%s%s\nwith link type %s.\n", - CARD_RDEV_ID(card), - CARD_WDEV_ID(card), - CARD_DDEV_ID(card), - qeth_get_cardname(card), - (card->info.mcl_level[0]) ? " (level: " : "", - (card->info.mcl_level[0]) ? card->info.mcl_level : "", - (card->info.mcl_level[0]) ? ")" : "", - qeth_get_cardname_short(card)); -} - -static void -qeth_print_status_message(struct qeth_card *card) -{ - switch (card->info.type) { - case QETH_CARD_TYPE_OSAE: - /* VM will use a non-zero first character - * to indicate a HiperSockets like reporting - * of the level OSA sets the first character to zero - * */ - if (!card->info.mcl_level[0]) { - sprintf(card->info.mcl_level,"%02x%02x", - card->info.mcl_level[2], - card->info.mcl_level[3]); - - card->info.mcl_level[QETH_MCL_LENGTH] = 0; - break; - } - /* fallthrough */ - case QETH_CARD_TYPE_IQD: - if (card->info.guestlan) { - card->info.mcl_level[0] = (char) _ebcasc[(__u8) - card->info.mcl_level[0]]; - card->info.mcl_level[1] = (char) _ebcasc[(__u8) - card->info.mcl_level[1]]; - card->info.mcl_level[2] = (char) _ebcasc[(__u8) - card->info.mcl_level[2]]; - card->info.mcl_level[3] = (char) _ebcasc[(__u8) - card->info.mcl_level[3]]; - card->info.mcl_level[QETH_MCL_LENGTH] = 0; - } - break; - default: - memset(&card->info.mcl_level[0], 0, QETH_MCL_LENGTH + 1); - } - if (card->info.portname_required) - qeth_print_status_with_portname(card); - else - qeth_print_status_no_portname(card); -} - -static int -qeth_register_netdev(struct qeth_card *card) -{ - QETH_DBF_TEXT(setup, 3, "regnetd"); - if (card->dev->reg_state != NETREG_UNINITIALIZED) - return 0; - /* sysfs magic */ - SET_NETDEV_DEV(card->dev, &card->gdev->dev); - return register_netdev(card->dev); -} - -static void -qeth_start_again(struct qeth_card *card, int recovery_mode) -{ - QETH_DBF_TEXT(setup ,2, "startag"); - - if (recovery_mode && - card->info.type != QETH_CARD_TYPE_OSN) { - qeth_open(card->dev); - } else { - rtnl_lock(); - dev_open(card->dev); - rtnl_unlock(); - } - /* this also sets saved unicast addresses */ - qeth_set_multicast_list(card->dev); -} - - -/* Layer 2 specific stuff */ -#define IGNORE_PARAM_EQ(option,value,reset_value,msg) \ - if (card->options.option == value) { \ - PRINT_ERR("%s not supported with layer 2 " \ - "functionality, ignoring option on read" \ - "channel device %s .\n",msg,CARD_RDEV_ID(card)); \ - card->options.option = reset_value; \ - } -#define IGNORE_PARAM_NEQ(option,value,reset_value,msg) \ - if (card->options.option != value) { \ - PRINT_ERR("%s not supported with layer 2 " \ - "functionality, ignoring option on read" \ - "channel device %s .\n",msg,CARD_RDEV_ID(card)); \ - card->options.option = reset_value; \ - } - - -static void qeth_make_parameters_consistent(struct qeth_card *card) -{ - - if (card->options.layer2 == 0) - return; - if (card->info.type 
== QETH_CARD_TYPE_OSN) - return; - if (card->info.type == QETH_CARD_TYPE_IQD) { - PRINT_ERR("Device %s does not support layer 2 functionality." \ - " Ignoring layer2 option.\n",CARD_BUS_ID(card)); - card->options.layer2 = 0; - return; - } - IGNORE_PARAM_NEQ(route4.type, NO_ROUTER, NO_ROUTER, - "Routing options are"); -#ifdef CONFIG_QETH_IPV6 - IGNORE_PARAM_NEQ(route6.type, NO_ROUTER, NO_ROUTER, - "Routing options are"); -#endif - IGNORE_PARAM_EQ(checksum_type, HW_CHECKSUMMING, - QETH_CHECKSUM_DEFAULT, - "Checksumming options are"); - IGNORE_PARAM_NEQ(broadcast_mode, QETH_TR_BROADCAST_ALLRINGS, - QETH_TR_BROADCAST_ALLRINGS, - "Broadcast mode options are"); - IGNORE_PARAM_NEQ(macaddr_mode, QETH_TR_MACADDR_NONCANONICAL, - QETH_TR_MACADDR_NONCANONICAL, - "Canonical MAC addr options are"); - IGNORE_PARAM_NEQ(fake_broadcast, 0, 0, - "Broadcast faking options are"); - IGNORE_PARAM_NEQ(add_hhlen, DEFAULT_ADD_HHLEN, - DEFAULT_ADD_HHLEN,"Option add_hhlen is"); - IGNORE_PARAM_NEQ(fake_ll, 0, 0,"Option fake_ll is"); -} - - -static int -__qeth_set_online(struct ccwgroup_device *gdev, int recovery_mode) -{ - struct qeth_card *card = gdev->dev.driver_data; - int rc = 0; - enum qeth_card_states recover_flag; - - BUG_ON(!card); - QETH_DBF_TEXT(setup ,2, "setonlin"); - QETH_DBF_HEX(setup, 2, &card, sizeof(void *)); - - qeth_set_allowed_threads(card, QETH_RECOVER_THREAD, 1); - if (qeth_wait_for_threads(card, ~QETH_RECOVER_THREAD)){ - PRINT_WARN("set_online of card %s interrupted by user!\n", - CARD_BUS_ID(card)); - return -ERESTARTSYS; - } - - recover_flag = card->state; - if ((rc = ccw_device_set_online(CARD_RDEV(card))) || - (rc = ccw_device_set_online(CARD_WDEV(card))) || - (rc = ccw_device_set_online(CARD_DDEV(card)))){ - QETH_DBF_TEXT_(setup, 2, "1err%d", rc); - return -EIO; - } - - qeth_make_parameters_consistent(card); - - if ((rc = qeth_hardsetup_card(card))){ - QETH_DBF_TEXT_(setup, 2, "2err%d", rc); - goto out_remove; - } - card->state = CARD_STATE_HARDSETUP; - - if (!(rc = qeth_query_ipassists(card,QETH_PROT_IPV4))) - rc = qeth_get_unique_id(card); - - if (rc && card->options.layer2 == 0) { - QETH_DBF_TEXT_(setup, 2, "3err%d", rc); - goto out_remove; - } - qeth_print_status_message(card); - if ((rc = qeth_register_netdev(card))){ - QETH_DBF_TEXT_(setup, 2, "4err%d", rc); - goto out_remove; - } - if ((rc = qeth_softsetup_card(card))){ - QETH_DBF_TEXT_(setup, 2, "5err%d", rc); - goto out_remove; - } - - if ((rc = qeth_init_qdio_queues(card))){ - QETH_DBF_TEXT_(setup, 2, "6err%d", rc); - goto out_remove; - } - card->state = CARD_STATE_SOFTSETUP; - netif_carrier_on(card->dev); - - qeth_set_allowed_threads(card, 0xffffffff, 0); - if (recover_flag == CARD_STATE_RECOVER) - qeth_start_again(card, recovery_mode); - qeth_notify_processes(); - return 0; -out_remove: - card->use_hard_stop = 1; - qeth_stop_card(card, 0); - ccw_device_set_offline(CARD_DDEV(card)); - ccw_device_set_offline(CARD_WDEV(card)); - ccw_device_set_offline(CARD_RDEV(card)); - if (recover_flag == CARD_STATE_RECOVER) - card->state = CARD_STATE_RECOVER; - else - card->state = CARD_STATE_DOWN; - return -ENODEV; -} - -static int -qeth_set_online(struct ccwgroup_device *gdev) -{ - return __qeth_set_online(gdev, 0); -} - -static struct ccw_device_id qeth_ids[] = { - {CCW_DEVICE(0x1731, 0x01), .driver_info = QETH_CARD_TYPE_OSAE}, - {CCW_DEVICE(0x1731, 0x05), .driver_info = QETH_CARD_TYPE_IQD}, - {CCW_DEVICE(0x1731, 0x06), .driver_info = QETH_CARD_TYPE_OSN}, - {}, -}; -MODULE_DEVICE_TABLE(ccw, qeth_ids); - -struct device *qeth_root_dev = NULL; - 
-struct ccwgroup_driver qeth_ccwgroup_driver = { - .owner = THIS_MODULE, - .name = "qeth", - .driver_id = 0xD8C5E3C8, - .probe = qeth_probe_device, - .remove = qeth_remove_device, - .set_online = qeth_set_online, - .set_offline = qeth_set_offline, -}; - -struct ccw_driver qeth_ccw_driver = { - .name = "qeth", - .ids = qeth_ids, - .probe = ccwgroup_probe_ccwdev, - .remove = ccwgroup_remove_ccwdev, -}; - - -static void -qeth_unregister_dbf_views(void) -{ - if (qeth_dbf_setup) - debug_unregister(qeth_dbf_setup); - if (qeth_dbf_qerr) - debug_unregister(qeth_dbf_qerr); - if (qeth_dbf_sense) - debug_unregister(qeth_dbf_sense); - if (qeth_dbf_misc) - debug_unregister(qeth_dbf_misc); - if (qeth_dbf_data) - debug_unregister(qeth_dbf_data); - if (qeth_dbf_control) - debug_unregister(qeth_dbf_control); - if (qeth_dbf_trace) - debug_unregister(qeth_dbf_trace); -} -static int -qeth_register_dbf_views(void) -{ - qeth_dbf_setup = debug_register(QETH_DBF_SETUP_NAME, - QETH_DBF_SETUP_PAGES, - QETH_DBF_SETUP_NR_AREAS, - QETH_DBF_SETUP_LEN); - qeth_dbf_misc = debug_register(QETH_DBF_MISC_NAME, - QETH_DBF_MISC_PAGES, - QETH_DBF_MISC_NR_AREAS, - QETH_DBF_MISC_LEN); - qeth_dbf_data = debug_register(QETH_DBF_DATA_NAME, - QETH_DBF_DATA_PAGES, - QETH_DBF_DATA_NR_AREAS, - QETH_DBF_DATA_LEN); - qeth_dbf_control = debug_register(QETH_DBF_CONTROL_NAME, - QETH_DBF_CONTROL_PAGES, - QETH_DBF_CONTROL_NR_AREAS, - QETH_DBF_CONTROL_LEN); - qeth_dbf_sense = debug_register(QETH_DBF_SENSE_NAME, - QETH_DBF_SENSE_PAGES, - QETH_DBF_SENSE_NR_AREAS, - QETH_DBF_SENSE_LEN); - qeth_dbf_qerr = debug_register(QETH_DBF_QERR_NAME, - QETH_DBF_QERR_PAGES, - QETH_DBF_QERR_NR_AREAS, - QETH_DBF_QERR_LEN); - qeth_dbf_trace = debug_register(QETH_DBF_TRACE_NAME, - QETH_DBF_TRACE_PAGES, - QETH_DBF_TRACE_NR_AREAS, - QETH_DBF_TRACE_LEN); - - if ((qeth_dbf_setup == NULL) || (qeth_dbf_misc == NULL) || - (qeth_dbf_data == NULL) || (qeth_dbf_control == NULL) || - (qeth_dbf_sense == NULL) || (qeth_dbf_qerr == NULL) || - (qeth_dbf_trace == NULL)) { - qeth_unregister_dbf_views(); - return -ENOMEM; - } - debug_register_view(qeth_dbf_setup, &debug_hex_ascii_view); - debug_set_level(qeth_dbf_setup, QETH_DBF_SETUP_LEVEL); - - debug_register_view(qeth_dbf_misc, &debug_hex_ascii_view); - debug_set_level(qeth_dbf_misc, QETH_DBF_MISC_LEVEL); - - debug_register_view(qeth_dbf_data, &debug_hex_ascii_view); - debug_set_level(qeth_dbf_data, QETH_DBF_DATA_LEVEL); - - debug_register_view(qeth_dbf_control, &debug_hex_ascii_view); - debug_set_level(qeth_dbf_control, QETH_DBF_CONTROL_LEVEL); - - debug_register_view(qeth_dbf_sense, &debug_hex_ascii_view); - debug_set_level(qeth_dbf_sense, QETH_DBF_SENSE_LEVEL); - - debug_register_view(qeth_dbf_qerr, &debug_hex_ascii_view); - debug_set_level(qeth_dbf_qerr, QETH_DBF_QERR_LEVEL); - - debug_register_view(qeth_dbf_trace, &debug_hex_ascii_view); - debug_set_level(qeth_dbf_trace, QETH_DBF_TRACE_LEVEL); - - return 0; -} - -#ifdef CONFIG_QETH_IPV6 -extern struct neigh_table arp_tbl; -static struct neigh_ops *arp_direct_ops; -static int (*qeth_old_arp_constructor) (struct neighbour *); - -static struct neigh_ops arp_direct_ops_template = { - .family = AF_INET, - .solicit = NULL, - .error_report = NULL, - .output = dev_queue_xmit, - .connected_output = dev_queue_xmit, - .hh_output = dev_queue_xmit, - .queue_xmit = dev_queue_xmit -}; - -static int -qeth_arp_constructor(struct neighbour *neigh) -{ - struct net_device *dev = neigh->dev; - struct in_device *in_dev; - struct neigh_parms *parms; - struct qeth_card *card; - - card = 
qeth_get_card_from_dev(dev); - if (card == NULL) - goto out; - if((card->options.layer2) || - (card->dev->header_ops == &qeth_fake_ops)) - goto out; - - rcu_read_lock(); - in_dev = __in_dev_get_rcu(dev); - if (in_dev == NULL) { - rcu_read_unlock(); - return -EINVAL; - } - - parms = in_dev->arp_parms; - __neigh_parms_put(neigh->parms); - neigh->parms = neigh_parms_clone(parms); - rcu_read_unlock(); - - neigh->type = inet_addr_type(&init_net, *(__be32 *) neigh->primary_key); - neigh->nud_state = NUD_NOARP; - neigh->ops = arp_direct_ops; - neigh->output = neigh->ops->queue_xmit; - return 0; -out: - return qeth_old_arp_constructor(neigh); -} -#endif /*CONFIG_QETH_IPV6*/ - -/* - * IP address takeover related functions - */ -static void -qeth_clear_ipato_list(struct qeth_card *card) -{ - struct qeth_ipato_entry *ipatoe, *tmp; - unsigned long flags; - - spin_lock_irqsave(&card->ip_lock, flags); - list_for_each_entry_safe(ipatoe, tmp, &card->ipato.entries, entry) { - list_del(&ipatoe->entry); - kfree(ipatoe); - } - spin_unlock_irqrestore(&card->ip_lock, flags); -} - -int -qeth_add_ipato_entry(struct qeth_card *card, struct qeth_ipato_entry *new) -{ - struct qeth_ipato_entry *ipatoe; - unsigned long flags; - int rc = 0; - - QETH_DBF_TEXT(trace, 2, "addipato"); - spin_lock_irqsave(&card->ip_lock, flags); - list_for_each_entry(ipatoe, &card->ipato.entries, entry){ - if (ipatoe->proto != new->proto) - continue; - if (!memcmp(ipatoe->addr, new->addr, - (ipatoe->proto == QETH_PROT_IPV4)? 4:16) && - (ipatoe->mask_bits == new->mask_bits)){ - PRINT_WARN("ipato entry already exists!\n"); - rc = -EEXIST; - break; - } - } - if (!rc) { - list_add_tail(&new->entry, &card->ipato.entries); - } - spin_unlock_irqrestore(&card->ip_lock, flags); - return rc; -} - -void -qeth_del_ipato_entry(struct qeth_card *card, enum qeth_prot_versions proto, - u8 *addr, int mask_bits) -{ - struct qeth_ipato_entry *ipatoe, *tmp; - unsigned long flags; - - QETH_DBF_TEXT(trace, 2, "delipato"); - spin_lock_irqsave(&card->ip_lock, flags); - list_for_each_entry_safe(ipatoe, tmp, &card->ipato.entries, entry){ - if (ipatoe->proto != proto) - continue; - if (!memcmp(ipatoe->addr, addr, - (proto == QETH_PROT_IPV4)? 4:16) && - (ipatoe->mask_bits == mask_bits)){ - list_del(&ipatoe->entry); - kfree(ipatoe); - } - } - spin_unlock_irqrestore(&card->ip_lock, flags); -} - -static void -qeth_convert_addr_to_bits(u8 *addr, u8 *bits, int len) -{ - int i, j; - u8 octet; - - for (i = 0; i < len; ++i){ - octet = addr[i]; - for (j = 7; j >= 0; --j){ - bits[i*8 + j] = octet & 1; - octet >>= 1; - } - } -} - -static int -qeth_is_addr_covered_by_ipato(struct qeth_card *card, struct qeth_ipaddr *addr) -{ - struct qeth_ipato_entry *ipatoe; - u8 addr_bits[128] = {0, }; - u8 ipatoe_bits[128] = {0, }; - int rc = 0; - - if (!card->ipato.enabled) - return 0; - - qeth_convert_addr_to_bits((u8 *) &addr->u, addr_bits, - (addr->proto == QETH_PROT_IPV4)? 4:16); - list_for_each_entry(ipatoe, &card->ipato.entries, entry){ - if (addr->proto != ipatoe->proto) - continue; - qeth_convert_addr_to_bits(ipatoe->addr, ipatoe_bits, - (ipatoe->proto==QETH_PROT_IPV4) ? - 4:16); - if (addr->proto == QETH_PROT_IPV4) - rc = !memcmp(addr_bits, ipatoe_bits, - min(32, ipatoe->mask_bits)); - else - rc = !memcmp(addr_bits, ipatoe_bits, - min(128, ipatoe->mask_bits)); - if (rc) - break; - } - /* invert? 
*/ - if ((addr->proto == QETH_PROT_IPV4) && card->ipato.invert4) - rc = !rc; - else if ((addr->proto == QETH_PROT_IPV6) && card->ipato.invert6) - rc = !rc; - - return rc; -} - -/* - * VIPA related functions - */ -int -qeth_add_vipa(struct qeth_card *card, enum qeth_prot_versions proto, - const u8 *addr) -{ - struct qeth_ipaddr *ipaddr; - unsigned long flags; - int rc = 0; - - ipaddr = qeth_get_addr_buffer(proto); - if (ipaddr){ - if (proto == QETH_PROT_IPV4){ - QETH_DBF_TEXT(trace, 2, "addvipa4"); - memcpy(&ipaddr->u.a4.addr, addr, 4); - ipaddr->u.a4.mask = 0; -#ifdef CONFIG_QETH_IPV6 - } else if (proto == QETH_PROT_IPV6){ - QETH_DBF_TEXT(trace, 2, "addvipa6"); - memcpy(&ipaddr->u.a6.addr, addr, 16); - ipaddr->u.a6.pfxlen = 0; -#endif - } - ipaddr->type = QETH_IP_TYPE_VIPA; - ipaddr->set_flags = QETH_IPA_SETIP_VIPA_FLAG; - ipaddr->del_flags = QETH_IPA_DELIP_VIPA_FLAG; - } else - return -ENOMEM; - spin_lock_irqsave(&card->ip_lock, flags); - if (__qeth_address_exists_in_list(&card->ip_list, ipaddr, 0) || - __qeth_address_exists_in_list(card->ip_tbd_list, ipaddr, 0)) - rc = -EEXIST; - spin_unlock_irqrestore(&card->ip_lock, flags); - if (rc){ - PRINT_WARN("Cannot add VIPA. Address already exists!\n"); - return rc; - } - if (!qeth_add_ip(card, ipaddr)) - kfree(ipaddr); - qeth_set_ip_addr_list(card); - return rc; -} - -void -qeth_del_vipa(struct qeth_card *card, enum qeth_prot_versions proto, - const u8 *addr) -{ - struct qeth_ipaddr *ipaddr; - - ipaddr = qeth_get_addr_buffer(proto); - if (ipaddr){ - if (proto == QETH_PROT_IPV4){ - QETH_DBF_TEXT(trace, 2, "delvipa4"); - memcpy(&ipaddr->u.a4.addr, addr, 4); - ipaddr->u.a4.mask = 0; -#ifdef CONFIG_QETH_IPV6 - } else if (proto == QETH_PROT_IPV6){ - QETH_DBF_TEXT(trace, 2, "delvipa6"); - memcpy(&ipaddr->u.a6.addr, addr, 16); - ipaddr->u.a6.pfxlen = 0; -#endif - } - ipaddr->type = QETH_IP_TYPE_VIPA; - } else - return; - if (!qeth_delete_ip(card, ipaddr)) - kfree(ipaddr); - qeth_set_ip_addr_list(card); -} - -/* - * proxy ARP related functions - */ -int -qeth_add_rxip(struct qeth_card *card, enum qeth_prot_versions proto, - const u8 *addr) -{ - struct qeth_ipaddr *ipaddr; - unsigned long flags; - int rc = 0; - - ipaddr = qeth_get_addr_buffer(proto); - if (ipaddr){ - if (proto == QETH_PROT_IPV4){ - QETH_DBF_TEXT(trace, 2, "addrxip4"); - memcpy(&ipaddr->u.a4.addr, addr, 4); - ipaddr->u.a4.mask = 0; -#ifdef CONFIG_QETH_IPV6 - } else if (proto == QETH_PROT_IPV6){ - QETH_DBF_TEXT(trace, 2, "addrxip6"); - memcpy(&ipaddr->u.a6.addr, addr, 16); - ipaddr->u.a6.pfxlen = 0; -#endif - } - ipaddr->type = QETH_IP_TYPE_RXIP; - ipaddr->set_flags = QETH_IPA_SETIP_TAKEOVER_FLAG; - ipaddr->del_flags = 0; - } else - return -ENOMEM; - spin_lock_irqsave(&card->ip_lock, flags); - if (__qeth_address_exists_in_list(&card->ip_list, ipaddr, 0) || - __qeth_address_exists_in_list(card->ip_tbd_list, ipaddr, 0)) - rc = -EEXIST; - spin_unlock_irqrestore(&card->ip_lock, flags); - if (rc){ - PRINT_WARN("Cannot add RXIP. 
Address already exists!\n"); - return rc; - } - if (!qeth_add_ip(card, ipaddr)) - kfree(ipaddr); - qeth_set_ip_addr_list(card); - return 0; -} - -void -qeth_del_rxip(struct qeth_card *card, enum qeth_prot_versions proto, - const u8 *addr) -{ - struct qeth_ipaddr *ipaddr; - - ipaddr = qeth_get_addr_buffer(proto); - if (ipaddr){ - if (proto == QETH_PROT_IPV4){ - QETH_DBF_TEXT(trace, 2, "addrxip4"); - memcpy(&ipaddr->u.a4.addr, addr, 4); - ipaddr->u.a4.mask = 0; -#ifdef CONFIG_QETH_IPV6 - } else if (proto == QETH_PROT_IPV6){ - QETH_DBF_TEXT(trace, 2, "addrxip6"); - memcpy(&ipaddr->u.a6.addr, addr, 16); - ipaddr->u.a6.pfxlen = 0; -#endif - } - ipaddr->type = QETH_IP_TYPE_RXIP; - } else - return; - if (!qeth_delete_ip(card, ipaddr)) - kfree(ipaddr); - qeth_set_ip_addr_list(card); -} - -/** - * IP event handler - */ -static int -qeth_ip_event(struct notifier_block *this, - unsigned long event,void *ptr) -{ - struct in_ifaddr *ifa = (struct in_ifaddr *)ptr; - struct net_device *dev =(struct net_device *) ifa->ifa_dev->dev; - struct qeth_ipaddr *addr; - struct qeth_card *card; - - QETH_DBF_TEXT(trace,3,"ipevent"); - card = qeth_get_card_from_dev(dev); - if (!card) - return NOTIFY_DONE; - if (card->options.layer2) - return NOTIFY_DONE; - - addr = qeth_get_addr_buffer(QETH_PROT_IPV4); - if (addr != NULL) { - addr->u.a4.addr = ifa->ifa_address; - addr->u.a4.mask = ifa->ifa_mask; - addr->type = QETH_IP_TYPE_NORMAL; - } else - goto out; - - switch(event) { - case NETDEV_UP: - if (!qeth_add_ip(card, addr)) - kfree(addr); - break; - case NETDEV_DOWN: - if (!qeth_delete_ip(card, addr)) - kfree(addr); - break; - default: - break; - } - qeth_set_ip_addr_list(card); -out: - return NOTIFY_DONE; -} - -static struct notifier_block qeth_ip_notifier = { - qeth_ip_event, - NULL, -}; - -#ifdef CONFIG_QETH_IPV6 -/** - * IPv6 event handler - */ -static int -qeth_ip6_event(struct notifier_block *this, - unsigned long event,void *ptr) -{ - - struct inet6_ifaddr *ifa = (struct inet6_ifaddr *)ptr; - struct net_device *dev = (struct net_device *)ifa->idev->dev; - struct qeth_ipaddr *addr; - struct qeth_card *card; - - QETH_DBF_TEXT(trace,3,"ip6event"); - - card = qeth_get_card_from_dev(dev); - if (!card) - return NOTIFY_DONE; - if (!qeth_is_supported(card, IPA_IPV6)) - return NOTIFY_DONE; - - addr = qeth_get_addr_buffer(QETH_PROT_IPV6); - if (addr != NULL) { - memcpy(&addr->u.a6.addr, &ifa->addr, sizeof(struct in6_addr)); - addr->u.a6.pfxlen = ifa->prefix_len; - addr->type = QETH_IP_TYPE_NORMAL; - } else - goto out; - - switch(event) { - case NETDEV_UP: - if (!qeth_add_ip(card, addr)) - kfree(addr); - break; - case NETDEV_DOWN: - if (!qeth_delete_ip(card, addr)) - kfree(addr); - break; - default: - break; - } - qeth_set_ip_addr_list(card); -out: - return NOTIFY_DONE; -} - -static struct notifier_block qeth_ip6_notifier = { - qeth_ip6_event, - NULL, -}; -#endif - -static int -__qeth_reboot_event_card(struct device *dev, void *data) -{ - struct qeth_card *card; - - card = (struct qeth_card *) dev->driver_data; - qeth_clear_ip_list(card, 0, 0); - qeth_qdio_clear_card(card, 0); - qeth_clear_qdio_buffers(card); - return 0; -} - -static int -qeth_reboot_event(struct notifier_block *this, unsigned long event, void *ptr) -{ - int ret; - - ret = driver_for_each_device(&qeth_ccwgroup_driver.driver, NULL, NULL, - __qeth_reboot_event_card); - return ret ? 
NOTIFY_BAD : NOTIFY_DONE; -} - - -static struct notifier_block qeth_reboot_notifier = { - qeth_reboot_event, - NULL, -}; - -static int -qeth_register_notifiers(void) -{ - int r; - - QETH_DBF_TEXT(trace,5,"regnotif"); - if ((r = register_reboot_notifier(&qeth_reboot_notifier))) - return r; - if ((r = register_inetaddr_notifier(&qeth_ip_notifier))) - goto out_reboot; -#ifdef CONFIG_QETH_IPV6 - if ((r = register_inet6addr_notifier(&qeth_ip6_notifier))) - goto out_ipv4; -#endif - return 0; - -#ifdef CONFIG_QETH_IPV6 -out_ipv4: - unregister_inetaddr_notifier(&qeth_ip_notifier); -#endif -out_reboot: - unregister_reboot_notifier(&qeth_reboot_notifier); - return r; -} - -/** - * unregister all event notifiers - */ -static void -qeth_unregister_notifiers(void) -{ - - QETH_DBF_TEXT(trace,5,"unregnot"); - BUG_ON(unregister_reboot_notifier(&qeth_reboot_notifier)); - BUG_ON(unregister_inetaddr_notifier(&qeth_ip_notifier)); -#ifdef CONFIG_QETH_IPV6 - BUG_ON(unregister_inet6addr_notifier(&qeth_ip6_notifier)); -#endif /* QETH_IPV6 */ - -} - -#ifdef CONFIG_QETH_IPV6 -static int -qeth_ipv6_init(void) -{ - qeth_old_arp_constructor = arp_tbl.constructor; - write_lock_bh(&arp_tbl.lock); - arp_tbl.constructor = qeth_arp_constructor; - write_unlock_bh(&arp_tbl.lock); - - arp_direct_ops = (struct neigh_ops*) - kmalloc(sizeof(struct neigh_ops), GFP_KERNEL); - if (!arp_direct_ops) - return -ENOMEM; - - memcpy(arp_direct_ops, &arp_direct_ops_template, - sizeof(struct neigh_ops)); - - return 0; -} - -static void -qeth_ipv6_uninit(void) -{ - write_lock_bh(&arp_tbl.lock); - arp_tbl.constructor = qeth_old_arp_constructor; - write_unlock_bh(&arp_tbl.lock); - kfree(arp_direct_ops); -} -#endif /* CONFIG_QETH_IPV6 */ - -static void -qeth_sysfs_unregister(void) -{ - s390_root_dev_unregister(qeth_root_dev); - qeth_remove_driver_attributes(); - ccw_driver_unregister(&qeth_ccw_driver); - ccwgroup_driver_unregister(&qeth_ccwgroup_driver); -} - -/** - * register qeth at sysfs - */ -static int -qeth_sysfs_register(void) -{ - int rc; - - rc = ccwgroup_driver_register(&qeth_ccwgroup_driver); - if (rc) - goto out; - - rc = ccw_driver_register(&qeth_ccw_driver); - if (rc) - goto out_ccw_driver; - - rc = qeth_create_driver_attributes(); - if (rc) - goto out_qeth_attr; - - qeth_root_dev = s390_root_dev_register("qeth"); - rc = IS_ERR(qeth_root_dev) ? 
PTR_ERR(qeth_root_dev) : 0;
- if (!rc)
- goto out;
-
- qeth_remove_driver_attributes();
-out_qeth_attr:
- ccw_driver_unregister(&qeth_ccw_driver);
-out_ccw_driver:
- ccwgroup_driver_unregister(&qeth_ccwgroup_driver);
-out:
- return rc;
-}
-
-/***
- * init function
- */
-static int __init
-qeth_init(void)
-{
- int rc;
-
- PRINT_INFO("loading %s\n", version);
-
- INIT_LIST_HEAD(&qeth_card_list.list);
- INIT_LIST_HEAD(&qeth_notify_list);
- spin_lock_init(&qeth_notify_lock);
- rwlock_init(&qeth_card_list.rwlock);
-
- rc = qeth_register_dbf_views();
- if (rc)
- goto out_err;
-
- rc = qeth_sysfs_register();
- if (rc)
- goto out_dbf;
-
-#ifdef CONFIG_QETH_IPV6
- rc = qeth_ipv6_init();
- if (rc) {
- PRINT_ERR("Out of memory during ipv6 init code = %d\n", rc);
- goto out_sysfs;
- }
-#endif /* QETH_IPV6 */
- rc = qeth_register_notifiers();
- if (rc)
- goto out_ipv6;
- rc = qeth_create_procfs_entries();
- if (rc)
- goto out_notifiers;
-
- return rc;
-
-out_notifiers:
- qeth_unregister_notifiers();
-out_ipv6:
-#ifdef CONFIG_QETH_IPV6
- qeth_ipv6_uninit();
-out_sysfs:
-#endif /* QETH_IPV6 */
- qeth_sysfs_unregister();
-out_dbf:
- qeth_unregister_dbf_views();
-out_err:
- PRINT_ERR("Initialization failed with code %d\n", rc);
- return rc;
-}
-
-static void
-__exit qeth_exit(void)
-{
- struct qeth_card *card, *tmp;
- unsigned long flags;
-
- QETH_DBF_TEXT(trace,1, "cleanup.");
-
- /*
- * Weed would not need to clean up our devices here, because the
- * common device layer calls qeth_remove_device for each device
- * as soon as we unregister our driver (done in qeth_sysfs_unregister).
- * But we do cleanup here so we can do a "soft" shutdown of our cards.
- * qeth_remove_device called by the common device layer would otherwise
- * do a "hard" shutdown (card->use_hard_stop is set to one in
- * qeth_remove_device).
- */
-again:
- read_lock_irqsave(&qeth_card_list.rwlock, flags);
- list_for_each_entry_safe(card, tmp, &qeth_card_list.list, list){
- read_unlock_irqrestore(&qeth_card_list.rwlock, flags);
- qeth_set_offline(card->gdev);
- qeth_remove_device(card->gdev);
- goto again;
- }
- read_unlock_irqrestore(&qeth_card_list.rwlock, flags);
-#ifdef CONFIG_QETH_IPV6
- qeth_ipv6_uninit();
-#endif
- qeth_unregister_notifiers();
- qeth_remove_procfs_entries();
- qeth_sysfs_unregister();
- qeth_unregister_dbf_views();
- printk("qeth: removed\n");
-}
-
-EXPORT_SYMBOL(qeth_osn_register);
-EXPORT_SYMBOL(qeth_osn_deregister);
-EXPORT_SYMBOL(qeth_osn_assist);
-module_init(qeth_init);
-module_exit(qeth_exit);
-MODULE_AUTHOR("Frank Pavlic ");
-MODULE_DESCRIPTION("Linux on zSeries OSA Express and HiperSockets support\n" \
- "Copyright 2000,2003 IBM Corporation\n");
-
-MODULE_LICENSE("GPL");
diff --git a/drivers/s390/net/qeth_mpc.c b/drivers/s390/net/qeth_mpc.c
deleted file mode 100644
index f29a4bc4f6f2..000000000000
--- a/drivers/s390/net/qeth_mpc.c
+++ /dev/null
@@ -1,269 +0,0 @@
-/*
- * linux/drivers/s390/net/qeth_mpc.c
- *
- * Linux on zSeries OSA Express and HiperSockets support
- *
- * Copyright 2000,2003 IBM Corporation
- * Author(s): Frank Pavlic
- * Thomas Spatzier
- *
- */
-#include
-#include "qeth_mpc.h"
-
-unsigned char IDX_ACTIVATE_READ[]={
- 0x00,0x00,0x80,0x00, 0x00,0x00,0x00,0x00,
- 0x19,0x01,0x01,0x80, 0x00,0x00,0x00,0x00,
- 0x00,0x00,0x00,0x00, 0x00,0x00,0xc8,0xc1,
- 0xd3,0xd3,0xd6,0xd3, 0xc5,0x40,0x00,0x00,
- 0x00,0x00
-};
-
-unsigned char IDX_ACTIVATE_WRITE[]={
- 0x00,0x00,0x80,0x00, 0x00,0x00,0x00,0x00,
- 0x15,0x01,0x01,0x80, 0x00,0x00,0x00,0x00,
- 0xff,0xff,0x00,0x00, 0x00,0x00,0xc8,0xc1,
- 0xd3,0xd3,0xd6,0xd3, 0xc5,0x40,0x00,0x00,
- 0x00,0x00
-};
-
-unsigned char CM_ENABLE[]={
- 0x00,0xe0,0x00,0x00, 0x00,0x00,0x00,0x01,
- 0x00,0x00,0x00,0x14, 0x00,0x00,0x00,0x63,
- 0x10,0x00,0x00,0x01,
- 0x00,0x00,0x00,0x00,
- 0x81,0x7e,0x00,0x01, 0x00,0x00,0x00,0x00,
- 0x00,0x00,0x00,0x00, 0x00,0x24,0x00,0x23,
- 0x00,0x00,0x23,0x05, 0x00,0x00,0x00,0x00,
- 0x00,0x00,0x00,0x00, 0x00,0x00,0x00,0x00,
- 0x01,0x00,0x00,0x23, 0x00,0x00,0x00,0x40,
- 0x00,0x0c,0x41,0x02, 0x00,0x17,0x00,0x00,
- 0x00,0x00,0x00,0x00,
- 0x00,0x0b,0x04,0x01,
- 0x7e,0x04,0x05,0x00, 0x01,0x01,0x0f,
- 0x00,
- 0x0c,0x04,0x02,0xff, 0xff,0xff,0xff,0xff,
- 0xff,0xff,0xff
-};
-
-unsigned char CM_SETUP[]={
- 0x00,0xe0,0x00,0x00, 0x00,0x00,0x00,0x02,
- 0x00,0x00,0x00,0x14, 0x00,0x00,0x00,0x64,
- 0x10,0x00,0x00,0x01,
- 0x00,0x00,0x00,0x00,
- 0x81,0x7e,0x00,0x01, 0x00,0x00,0x00,0x00,
- 0x00,0x00,0x00,0x00, 0x00,0x24,0x00,0x24,
- 0x00,0x00,0x24,0x05, 0x00,0x00,0x00,0x00,
- 0x00,0x00,0x00,0x00, 0x00,0x00,0x00,0x00,
- 0x01,0x00,0x00,0x24, 0x00,0x00,0x00,0x40,
- 0x00,0x0c,0x41,0x04, 0x00,0x18,0x00,0x00,
- 0x00,0x00,0x00,0x00,
- 0x00,0x09,0x04,0x04,
- 0x05,0x00,0x01,0x01, 0x11,
- 0x00,0x09,0x04,
- 0x05,0x05,0x00,0x00, 0x00,0x00,
- 0x00,0x06,
- 0x04,0x06,0xc8,0x00
-};
-
-unsigned char ULP_ENABLE[]={
- 0x00,0xe0,0x00,0x00, 0x00,0x00,0x00,0x03,
- 0x00,0x00,0x00,0x14, 0x00,0x00,0x00,0x6b,
- 0x10,0x00,0x00,0x01,
- 0x00,0x00,0x00,0x00,
- 0x41,0x7e,0x00,0x01, 0x00,0x00,0x00,0x01,
- 0x00,0x00,0x00,0x00, 0x00,0x24,0x00,0x2b,
- 0x00,0x00,0x2b,0x05, 0x20,0x01,0x00,0x00,
- 0x00,0x00,0x00,0x00, 0x00,0x00,0x00,0x00,
- 0x01,0x00,0x00,0x2b, 0x00,0x00,0x00,0x40,
- 0x00,0x0c,0x41,0x02, 0x00,0x1f,0x00,0x00,
- 0x00,0x00,0x00,0x00,
- 0x00,0x0b,0x04,0x01,
- 0x03,0x04,0x05,0x00, 0x01,0x01,0x12,
- 0x00,
- 0x14,0x04,0x0a,0x00, 0x20,0x00,0x00,0xff,
- 0xff,0x00,0x08,0xc8,
0xe8,0xc4,0xf1,0xc7, - 0xf1,0x00,0x00 -}; - -unsigned char ULP_SETUP[]={ - 0x00,0xe0,0x00,0x00, 0x00,0x00,0x00,0x04, - 0x00,0x00,0x00,0x14, 0x00,0x00,0x00,0x6c, - 0x10,0x00,0x00,0x01, - 0x00,0x00,0x00,0x00, - 0x41,0x7e,0x00,0x01, 0x00,0x00,0x00,0x02, - 0x00,0x00,0x00,0x01, 0x00,0x24,0x00,0x2c, - 0x00,0x00,0x2c,0x05, 0x20,0x01,0x00,0x00, - 0x00,0x00,0x00,0x00, 0x00,0x00,0x00,0x00, - 0x01,0x00,0x00,0x2c, 0x00,0x00,0x00,0x40, - 0x00,0x0c,0x41,0x04, 0x00,0x20,0x00,0x00, - 0x00,0x00,0x00,0x00, - 0x00,0x09,0x04,0x04, - 0x05,0x00,0x01,0x01, 0x14, - 0x00,0x09,0x04, - 0x05,0x05,0x30,0x01, 0x00,0x00, - 0x00,0x06, - 0x04,0x06,0x40,0x00, - 0x00,0x08,0x04,0x0b, - 0x00,0x00,0x00,0x00 -}; - -unsigned char DM_ACT[]={ - 0x00,0xe0,0x00,0x00, 0x00,0x00,0x00,0x05, - 0x00,0x00,0x00,0x14, 0x00,0x00,0x00,0x55, - 0x10,0x00,0x00,0x01, - 0x00,0x00,0x00,0x00, - 0x41,0x7e,0x00,0x01, 0x00,0x00,0x00,0x03, - 0x00,0x00,0x00,0x02, 0x00,0x24,0x00,0x15, - 0x00,0x00,0x2c,0x05, 0x20,0x01,0x00,0x00, - 0x00,0x00,0x00,0x00, 0x00,0x00,0x00,0x00, - 0x01,0x00,0x00,0x15, 0x00,0x00,0x00,0x40, - 0x00,0x0c,0x43,0x60, 0x00,0x09,0x00,0x00, - 0x00,0x00,0x00,0x00, - 0x00,0x09,0x04,0x04, - 0x05,0x40,0x01,0x01, 0x00 -}; - -unsigned char IPA_PDU_HEADER[]={ - 0x00,0xe0,0x00,0x00, 0x77,0x77,0x77,0x77, - 0x00,0x00,0x00,0x14, 0x00,0x00, - (IPA_PDU_HEADER_SIZE+sizeof(struct qeth_ipa_cmd))/256, - (IPA_PDU_HEADER_SIZE+sizeof(struct qeth_ipa_cmd))%256, - 0x10,0x00,0x00,0x01, 0x00,0x00,0x00,0x00, - 0xc1,0x03,0x00,0x01, 0x00,0x00,0x00,0x00, - 0x00,0x00,0x00,0x00, 0x00,0x24, - sizeof(struct qeth_ipa_cmd)/256, - sizeof(struct qeth_ipa_cmd)%256, - 0x00, - sizeof(struct qeth_ipa_cmd)/256, - sizeof(struct qeth_ipa_cmd)%256, - 0x05, - 0x77,0x77,0x77,0x77, - 0x00,0x00,0x00,0x00, 0x00,0x00,0x00,0x00, - 0x01,0x00, - sizeof(struct qeth_ipa_cmd)/256, - sizeof(struct qeth_ipa_cmd)%256, - 0x00,0x00,0x00,0x40, -}; - -unsigned char WRITE_CCW[]={ - 0x01,CCW_FLAG_SLI,0,0, - 0,0,0,0 -}; - -unsigned char READ_CCW[]={ - 0x02,CCW_FLAG_SLI,0,0, - 0,0,0,0 -}; - - -struct ipa_rc_msg { - enum qeth_ipa_return_codes rc; - char *msg; -}; - -static struct ipa_rc_msg qeth_ipa_rc_msg[] = { - {IPA_RC_SUCCESS, "success"}, - {IPA_RC_NOTSUPP, "Command not supported"}, - {IPA_RC_IP_TABLE_FULL, "Add Addr IP Table Full - ipv6"}, - {IPA_RC_UNKNOWN_ERROR, "IPA command failed - reason unknown"}, - {IPA_RC_UNSUPPORTED_COMMAND, "Command not supported"}, - {IPA_RC_DUP_IPV6_REMOTE,"ipv6 address already registered remote"}, - {IPA_RC_DUP_IPV6_HOME, "ipv6 address already registered"}, - {IPA_RC_UNREGISTERED_ADDR, "Address not registered"}, - {IPA_RC_NO_ID_AVAILABLE, "No identifiers available"}, - {IPA_RC_ID_NOT_FOUND, "Identifier not found"}, - {IPA_RC_INVALID_IP_VERSION, "IP version incorrect"}, - {IPA_RC_LAN_FRAME_MISMATCH, "LAN and frame mismatch"}, - {IPA_RC_L2_UNSUPPORTED_CMD, "Unsupported layer 2 command"}, - {IPA_RC_L2_DUP_MAC, "Duplicate MAC address"}, - {IPA_RC_L2_ADDR_TABLE_FULL, "Layer2 address table full"}, - {IPA_RC_L2_DUP_LAYER3_MAC, "Duplicate with layer 3 MAC"}, - {IPA_RC_L2_GMAC_NOT_FOUND, "GMAC not found"}, - {IPA_RC_L2_MAC_NOT_FOUND, "L2 mac address not found"}, - {IPA_RC_L2_INVALID_VLAN_ID, "L2 invalid vlan id"}, - {IPA_RC_L2_DUP_VLAN_ID, "L2 duplicate vlan id"}, - {IPA_RC_L2_VLAN_ID_NOT_FOUND, "L2 vlan id not found"}, - {IPA_RC_DATA_MISMATCH, "Data field mismatch (v4/v6 mixed)"}, - {IPA_RC_INVALID_MTU_SIZE, "Invalid MTU size"}, - {IPA_RC_INVALID_LANTYPE, "Invalid LAN type"}, - {IPA_RC_INVALID_LANNUM, "Invalid LAN num"}, - {IPA_RC_DUPLICATE_IP_ADDRESS, "Address already 
registered"}, - {IPA_RC_IP_ADDR_TABLE_FULL, "IP address table full"}, - {IPA_RC_LAN_PORT_STATE_ERROR, "LAN port state error"}, - {IPA_RC_SETIP_NO_STARTLAN, "Setip no startlan received"}, - {IPA_RC_SETIP_ALREADY_RECEIVED, "Setip already received"}, - {IPA_RC_IP_ADDR_ALREADY_USED, "IP address already in use on LAN"}, - {IPA_RC_MULTICAST_FULL, "No task available, multicast full"}, - {IPA_RC_SETIP_INVALID_VERSION, "SETIP invalid IP version"}, - {IPA_RC_UNSUPPORTED_SUBCMD, "Unsupported assist subcommand"}, - {IPA_RC_ARP_ASSIST_NO_ENABLE, "Only partial success, no enable"}, - {IPA_RC_PRIMARY_ALREADY_DEFINED,"Primary already defined"}, - {IPA_RC_SECOND_ALREADY_DEFINED, "Secondary already defined"}, - {IPA_RC_INVALID_SETRTG_INDICATOR,"Invalid SETRTG indicator"}, - {IPA_RC_MC_ADDR_ALREADY_DEFINED,"Multicast address already defined"}, - {IPA_RC_LAN_OFFLINE, "STRTLAN_LAN_DISABLED - LAN offline"}, - {IPA_RC_INVALID_IP_VERSION2, "Invalid IP version"}, - {IPA_RC_FFFF, "Unknown Error"} -}; - - - -char * -qeth_get_ipa_msg(enum qeth_ipa_return_codes rc) -{ - int x = 0; - qeth_ipa_rc_msg[sizeof(qeth_ipa_rc_msg) / - sizeof(struct ipa_rc_msg) - 1].rc = rc; - while(qeth_ipa_rc_msg[x].rc != rc) - x++; - return qeth_ipa_rc_msg[x].msg; -} - - -struct ipa_cmd_names { - enum qeth_ipa_cmds cmd; - char *name; -}; - -static struct ipa_cmd_names qeth_ipa_cmd_names[] = { - {IPA_CMD_STARTLAN, "startlan"}, - {IPA_CMD_STOPLAN, "stoplan"}, - {IPA_CMD_SETVMAC, "setvmac"}, - {IPA_CMD_DELVMAC, "delvmca"}, - {IPA_CMD_SETGMAC, "setgmac"}, - {IPA_CMD_DELGMAC, "delgmac"}, - {IPA_CMD_SETVLAN, "setvlan"}, - {IPA_CMD_DELVLAN, "delvlan"}, - {IPA_CMD_SETCCID, "setccid"}, - {IPA_CMD_DELCCID, "delccid"}, - {IPA_CMD_MODCCID, "setip"}, - {IPA_CMD_SETIP, "setip"}, - {IPA_CMD_QIPASSIST, "qipassist"}, - {IPA_CMD_SETASSPARMS, "setassparms"}, - {IPA_CMD_SETIPM, "setipm"}, - {IPA_CMD_DELIPM, "delipm"}, - {IPA_CMD_SETRTG, "setrtg"}, - {IPA_CMD_DELIP, "delip"}, - {IPA_CMD_SETADAPTERPARMS, "setadapterparms"}, - {IPA_CMD_SET_DIAG_ASS, "set_diag_ass"}, - {IPA_CMD_CREATE_ADDR, "create_addr"}, - {IPA_CMD_DESTROY_ADDR, "destroy_addr"}, - {IPA_CMD_REGISTER_LOCAL_ADDR, "register_local_addr"}, - {IPA_CMD_UNREGISTER_LOCAL_ADDR, "unregister_local_addr"}, - {IPA_CMD_UNKNOWN, "unknown"}, -}; - -char * -qeth_get_ipa_cmd_name(enum qeth_ipa_cmds cmd) -{ - int x = 0; - qeth_ipa_cmd_names[ - sizeof(qeth_ipa_cmd_names)/ - sizeof(struct ipa_cmd_names)-1].cmd = cmd; - while(qeth_ipa_cmd_names[x].cmd != cmd) - x++; - return qeth_ipa_cmd_names[x].name; -} - - diff --git a/drivers/s390/net/qeth_mpc.h b/drivers/s390/net/qeth_mpc.h deleted file mode 100644 index 6de2da5ed5fd..000000000000 --- a/drivers/s390/net/qeth_mpc.h +++ /dev/null @@ -1,583 +0,0 @@ -/* - * linux/drivers/s390/net/qeth_mpc.h - * - * Linux on zSeries OSA Express and HiperSockets support - * - * Copyright 2000,2003 IBM Corporation - * Author(s): Utz Bacher - * Thomas Spatzier - * Frank Pavlic - * - */ -#ifndef __QETH_MPC_H__ -#define __QETH_MPC_H__ - -#include - -#define IPA_PDU_HEADER_SIZE 0x40 -#define QETH_IPA_PDU_LEN_TOTAL(buffer) (buffer+0x0e) -#define QETH_IPA_PDU_LEN_PDU1(buffer) (buffer+0x26) -#define QETH_IPA_PDU_LEN_PDU2(buffer) (buffer+0x29) -#define QETH_IPA_PDU_LEN_PDU3(buffer) (buffer+0x3a) - -extern unsigned char IPA_PDU_HEADER[]; -#define QETH_IPA_CMD_DEST_ADDR(buffer) (buffer+0x2c) - -#define IPA_CMD_LENGTH (IPA_PDU_HEADER_SIZE + sizeof(struct qeth_ipa_cmd)) - -#define QETH_SEQ_NO_LENGTH 4 -#define QETH_MPC_TOKEN_LENGTH 4 -#define QETH_MCL_LENGTH 4 -#define OSA_ADDR_LEN 6 - -#define 
QETH_TIMEOUT (10 * HZ) -#define QETH_IPA_TIMEOUT (45 * HZ) -#define QETH_IDX_COMMAND_SEQNO 0xffff0000 -#define SR_INFO_LEN 16 - -#define QETH_CLEAR_CHANNEL_PARM -10 -#define QETH_HALT_CHANNEL_PARM -11 -#define QETH_RCD_PARM -12 - -/*****************************************************************************/ -/* IP Assist related definitions */ -/*****************************************************************************/ -#define IPA_CMD_INITIATOR_HOST 0x00 -#define IPA_CMD_INITIATOR_OSA 0x01 -#define IPA_CMD_INITIATOR_HOST_REPLY 0x80 -#define IPA_CMD_INITIATOR_OSA_REPLY 0x81 -#define IPA_CMD_PRIM_VERSION_NO 0x01 - -enum qeth_card_types { - QETH_CARD_TYPE_UNKNOWN = 0, - QETH_CARD_TYPE_OSAE = 10, - QETH_CARD_TYPE_IQD = 1234, - QETH_CARD_TYPE_OSN = 11, -}; - -#define QETH_MPC_DIFINFO_LEN_INDICATES_LINK_TYPE 0x18 -/* only the first two bytes are looked at in qeth_get_cardname_short */ -enum qeth_link_types { - QETH_LINK_TYPE_FAST_ETH = 0x01, - QETH_LINK_TYPE_HSTR = 0x02, - QETH_LINK_TYPE_GBIT_ETH = 0x03, - QETH_LINK_TYPE_OSN = 0x04, - QETH_LINK_TYPE_10GBIT_ETH = 0x10, - QETH_LINK_TYPE_LANE_ETH100 = 0x81, - QETH_LINK_TYPE_LANE_TR = 0x82, - QETH_LINK_TYPE_LANE_ETH1000 = 0x83, - QETH_LINK_TYPE_LANE = 0x88, - QETH_LINK_TYPE_ATM_NATIVE = 0x90, -}; - -enum qeth_tr_macaddr_modes { - QETH_TR_MACADDR_NONCANONICAL = 0, - QETH_TR_MACADDR_CANONICAL = 1, -}; - -enum qeth_tr_broadcast_modes { - QETH_TR_BROADCAST_ALLRINGS = 0, - QETH_TR_BROADCAST_LOCAL = 1, -}; - -/* these values match CHECKSUM_* in include/linux/skbuff.h */ -enum qeth_checksum_types { - SW_CHECKSUMMING = 0, /* TODO: set to bit flag used in IPA Command */ - HW_CHECKSUMMING = 1, - NO_CHECKSUMMING = 2, -}; -#define QETH_CHECKSUM_DEFAULT SW_CHECKSUMMING - -/* - * Routing stuff - */ -#define RESET_ROUTING_FLAG 0x10 /* indicate that routing type shall be set */ -enum qeth_routing_types { - NO_ROUTER = 0, /* TODO: set to bit flag used in IPA Command */ - PRIMARY_ROUTER = 1, - SECONDARY_ROUTER = 2, - MULTICAST_ROUTER = 3, - PRIMARY_CONNECTOR = 4, - SECONDARY_CONNECTOR = 5, -}; - -/* IPA Commands */ -enum qeth_ipa_cmds { - IPA_CMD_STARTLAN = 0x01, - IPA_CMD_STOPLAN = 0x02, - IPA_CMD_SETVMAC = 0x21, - IPA_CMD_DELVMAC = 0x22, - IPA_CMD_SETGMAC = 0x23, - IPA_CMD_DELGMAC = 0x24, - IPA_CMD_SETVLAN = 0x25, - IPA_CMD_DELVLAN = 0x26, - IPA_CMD_SETCCID = 0x41, - IPA_CMD_DELCCID = 0x42, - IPA_CMD_MODCCID = 0x43, - IPA_CMD_SETIP = 0xb1, - IPA_CMD_QIPASSIST = 0xb2, - IPA_CMD_SETASSPARMS = 0xb3, - IPA_CMD_SETIPM = 0xb4, - IPA_CMD_DELIPM = 0xb5, - IPA_CMD_SETRTG = 0xb6, - IPA_CMD_DELIP = 0xb7, - IPA_CMD_SETADAPTERPARMS = 0xb8, - IPA_CMD_SET_DIAG_ASS = 0xb9, - IPA_CMD_CREATE_ADDR = 0xc3, - IPA_CMD_DESTROY_ADDR = 0xc4, - IPA_CMD_REGISTER_LOCAL_ADDR = 0xd1, - IPA_CMD_UNREGISTER_LOCAL_ADDR = 0xd2, - IPA_CMD_UNKNOWN = 0x00 -}; - -enum qeth_ip_ass_cmds { - IPA_CMD_ASS_START = 0x0001, - IPA_CMD_ASS_STOP = 0x0002, - IPA_CMD_ASS_CONFIGURE = 0x0003, - IPA_CMD_ASS_ENABLE = 0x0004, -}; - -enum qeth_arp_process_subcmds { - IPA_CMD_ASS_ARP_SET_NO_ENTRIES = 0x0003, - IPA_CMD_ASS_ARP_QUERY_CACHE = 0x0004, - IPA_CMD_ASS_ARP_ADD_ENTRY = 0x0005, - IPA_CMD_ASS_ARP_REMOVE_ENTRY = 0x0006, - IPA_CMD_ASS_ARP_FLUSH_CACHE = 0x0007, - IPA_CMD_ASS_ARP_QUERY_INFO = 0x0104, - IPA_CMD_ASS_ARP_QUERY_STATS = 0x0204, -}; - - -/* Return Codes for IPA Commands - * according to OSA card Specs */ - -enum qeth_ipa_return_codes { - IPA_RC_SUCCESS = 0x0000, - IPA_RC_NOTSUPP = 0x0001, - IPA_RC_IP_TABLE_FULL = 0x0002, - IPA_RC_UNKNOWN_ERROR = 0x0003, - IPA_RC_UNSUPPORTED_COMMAND = 0x0004, - 
IPA_RC_DUP_IPV6_REMOTE = 0x0008, - IPA_RC_DUP_IPV6_HOME = 0x0010, - IPA_RC_UNREGISTERED_ADDR = 0x0011, - IPA_RC_NO_ID_AVAILABLE = 0x0012, - IPA_RC_ID_NOT_FOUND = 0x0013, - IPA_RC_INVALID_IP_VERSION = 0x0020, - IPA_RC_LAN_FRAME_MISMATCH = 0x0040, - IPA_RC_L2_UNSUPPORTED_CMD = 0x2003, - IPA_RC_L2_DUP_MAC = 0x2005, - IPA_RC_L2_ADDR_TABLE_FULL = 0x2006, - IPA_RC_L2_DUP_LAYER3_MAC = 0x200a, - IPA_RC_L2_GMAC_NOT_FOUND = 0x200b, - IPA_RC_L2_MAC_NOT_FOUND = 0x2010, - IPA_RC_L2_INVALID_VLAN_ID = 0x2015, - IPA_RC_L2_DUP_VLAN_ID = 0x2016, - IPA_RC_L2_VLAN_ID_NOT_FOUND = 0x2017, - IPA_RC_DATA_MISMATCH = 0xe001, - IPA_RC_INVALID_MTU_SIZE = 0xe002, - IPA_RC_INVALID_LANTYPE = 0xe003, - IPA_RC_INVALID_LANNUM = 0xe004, - IPA_RC_DUPLICATE_IP_ADDRESS = 0xe005, - IPA_RC_IP_ADDR_TABLE_FULL = 0xe006, - IPA_RC_LAN_PORT_STATE_ERROR = 0xe007, - IPA_RC_SETIP_NO_STARTLAN = 0xe008, - IPA_RC_SETIP_ALREADY_RECEIVED = 0xe009, - IPA_RC_IP_ADDR_ALREADY_USED = 0xe00a, - IPA_RC_MULTICAST_FULL = 0xe00b, - IPA_RC_SETIP_INVALID_VERSION = 0xe00d, - IPA_RC_UNSUPPORTED_SUBCMD = 0xe00e, - IPA_RC_ARP_ASSIST_NO_ENABLE = 0xe00f, - IPA_RC_PRIMARY_ALREADY_DEFINED = 0xe010, - IPA_RC_SECOND_ALREADY_DEFINED = 0xe011, - IPA_RC_INVALID_SETRTG_INDICATOR = 0xe012, - IPA_RC_MC_ADDR_ALREADY_DEFINED = 0xe013, - IPA_RC_LAN_OFFLINE = 0xe080, - IPA_RC_INVALID_IP_VERSION2 = 0xf001, - IPA_RC_FFFF = 0xffff -}; - -/* IPA function flags; each flag marks availability of respective function */ -enum qeth_ipa_funcs { - IPA_ARP_PROCESSING = 0x00000001L, - IPA_INBOUND_CHECKSUM = 0x00000002L, - IPA_OUTBOUND_CHECKSUM = 0x00000004L, - IPA_IP_FRAGMENTATION = 0x00000008L, - IPA_FILTERING = 0x00000010L, - IPA_IPV6 = 0x00000020L, - IPA_MULTICASTING = 0x00000040L, - IPA_IP_REASSEMBLY = 0x00000080L, - IPA_QUERY_ARP_COUNTERS = 0x00000100L, - IPA_QUERY_ARP_ADDR_INFO = 0x00000200L, - IPA_SETADAPTERPARMS = 0x00000400L, - IPA_VLAN_PRIO = 0x00000800L, - IPA_PASSTHRU = 0x00001000L, - IPA_FLUSH_ARP_SUPPORT = 0x00002000L, - IPA_FULL_VLAN = 0x00004000L, - IPA_INBOUND_PASSTHRU = 0x00008000L, - IPA_SOURCE_MAC = 0x00010000L, - IPA_OSA_MC_ROUTER = 0x00020000L, - IPA_QUERY_ARP_ASSIST = 0x00040000L, - IPA_INBOUND_TSO = 0x00080000L, - IPA_OUTBOUND_TSO = 0x00100000L, -}; - -/* SETIP/DELIP IPA Command: ***************************************************/ -enum qeth_ipa_setdelip_flags { - QETH_IPA_SETDELIP_DEFAULT = 0x00L, /* default */ - QETH_IPA_SETIP_VIPA_FLAG = 0x01L, /* no grat. ARP */ - QETH_IPA_SETIP_TAKEOVER_FLAG = 0x02L, /* nofail on grat. 
ARP */ - QETH_IPA_DELIP_ADDR_2_B_TAKEN_OVER = 0x20L, - QETH_IPA_DELIP_VIPA_FLAG = 0x40L, - QETH_IPA_DELIP_ADDR_NEEDS_SETIP = 0x80L, -}; - -/* SETADAPTER IPA Command: ****************************************************/ -enum qeth_ipa_setadp_cmd { - IPA_SETADP_QUERY_COMMANDS_SUPPORTED = 0x01, - IPA_SETADP_ALTER_MAC_ADDRESS = 0x02, - IPA_SETADP_ADD_DELETE_GROUP_ADDRESS = 0x04, - IPA_SETADP_ADD_DELETE_FUNCTIONAL_ADDR = 0x08, - IPA_SETADP_SET_ADDRESSING_MODE = 0x10, - IPA_SETADP_SET_CONFIG_PARMS = 0x20, - IPA_SETADP_SET_CONFIG_PARMS_EXTENDED = 0x40, - IPA_SETADP_SET_BROADCAST_MODE = 0x80, - IPA_SETADP_SEND_OSA_MESSAGE = 0x0100, - IPA_SETADP_SET_SNMP_CONTROL = 0x0200, - IPA_SETADP_QUERY_CARD_INFO = 0x0400, - IPA_SETADP_SET_PROMISC_MODE = 0x0800, -}; -enum qeth_ipa_mac_ops { - CHANGE_ADDR_READ_MAC = 0, - CHANGE_ADDR_REPLACE_MAC = 1, - CHANGE_ADDR_ADD_MAC = 2, - CHANGE_ADDR_DEL_MAC = 4, - CHANGE_ADDR_RESET_MAC = 8, -}; -enum qeth_ipa_addr_ops { - CHANGE_ADDR_READ_ADDR = 0, - CHANGE_ADDR_ADD_ADDR = 1, - CHANGE_ADDR_DEL_ADDR = 2, - CHANGE_ADDR_FLUSH_ADDR_TABLE = 4, -}; -enum qeth_ipa_promisc_modes { - SET_PROMISC_MODE_OFF = 0, - SET_PROMISC_MODE_ON = 1, -}; - -/* (SET)DELIP(M) IPA stuff ***************************************************/ -struct qeth_ipacmd_setdelip4 { - __u8 ip_addr[4]; - __u8 mask[4]; - __u32 flags; -} __attribute__ ((packed)); - -struct qeth_ipacmd_setdelip6 { - __u8 ip_addr[16]; - __u8 mask[16]; - __u32 flags; -} __attribute__ ((packed)); - -struct qeth_ipacmd_setdelipm { - __u8 mac[6]; - __u8 padding[2]; - __u8 ip6[12]; - __u8 ip4[4]; -} __attribute__ ((packed)); - -struct qeth_ipacmd_layer2setdelmac { - __u32 mac_length; - __u8 mac[6]; -} __attribute__ ((packed)); - -struct qeth_ipacmd_layer2setdelvlan { - __u16 vlan_id; -} __attribute__ ((packed)); - - -struct qeth_ipacmd_setassparms_hdr { - __u32 assist_no; - __u16 length; - __u16 command_code; - __u16 return_code; - __u8 number_of_replies; - __u8 seq_no; -} __attribute__((packed)); - -struct qeth_arp_query_data { - __u16 request_bits; - __u16 reply_bits; - __u32 no_entries; - char data; -} __attribute__((packed)); - -/* used as parameter for arp_query reply */ -struct qeth_arp_query_info { - __u32 udata_len; - __u16 mask_bits; - __u32 udata_offset; - __u32 no_entries; - char *udata; -}; - -/* SETASSPARMS IPA Command: */ -struct qeth_ipacmd_setassparms { - struct qeth_ipacmd_setassparms_hdr hdr; - union { - __u32 flags_32bit; - struct qeth_arp_cache_entry add_arp_entry; - struct qeth_arp_query_data query_arp; - __u8 ip[16]; - } data; -} __attribute__ ((packed)); - - -/* SETRTG IPA Command: ****************************************************/ -struct qeth_set_routing { - __u8 type; -}; - -/* SETADAPTERPARMS IPA Command: *******************************************/ -struct qeth_query_cmds_supp { - __u32 no_lantypes_supp; - __u8 lan_type; - __u8 reserved1[3]; - __u32 supported_cmds; - __u8 reserved2[8]; -} __attribute__ ((packed)); - -struct qeth_change_addr { - __u32 cmd; - __u32 addr_size; - __u32 no_macs; - __u8 addr[OSA_ADDR_LEN]; -} __attribute__ ((packed)); - - -struct qeth_snmp_cmd { - __u8 token[16]; - __u32 request; - __u32 interface; - __u32 returncode; - __u32 firmwarelevel; - __u32 seqno; - __u8 data; -} __attribute__ ((packed)); - -struct qeth_snmp_ureq_hdr { - __u32 data_len; - __u32 req_len; - __u32 reserved1; - __u32 reserved2; -} __attribute__ ((packed)); - -struct qeth_snmp_ureq { - struct qeth_snmp_ureq_hdr hdr; - struct qeth_snmp_cmd cmd; -} __attribute__((packed)); - -struct qeth_ipacmd_setadpparms_hdr 
{ - __u32 supp_hw_cmds; - __u32 reserved1; - __u16 cmdlength; - __u16 reserved2; - __u32 command_code; - __u16 return_code; - __u8 used_total; - __u8 seq_no; - __u32 reserved3; -} __attribute__ ((packed)); - -struct qeth_ipacmd_setadpparms { - struct qeth_ipacmd_setadpparms_hdr hdr; - union { - struct qeth_query_cmds_supp query_cmds_supp; - struct qeth_change_addr change_addr; - struct qeth_snmp_cmd snmp; - __u32 mode; - } data; -} __attribute__ ((packed)); - -/* IPFRAME IPA Command: ***************************************************/ -/* TODO: define in analogy to commands define above */ - -/* ADD_ADDR_ENTRY IPA Command: ********************************************/ -/* TODO: define in analogy to commands define above */ - -/* DELETE_ADDR_ENTRY IPA Command: *****************************************/ -/* TODO: define in analogy to commands define above */ - -/* CREATE_ADDR IPA Command: ***********************************************/ -struct qeth_create_destroy_address { - __u8 unique_id[8]; -} __attribute__ ((packed)); - -/* REGISTER_LOCAL_ADDR IPA Command: ***************************************/ -/* TODO: define in analogy to commands define above */ - -/* UNREGISTER_LOCAL_ADDR IPA Command: *************************************/ -/* TODO: define in analogy to commands define above */ - -/* Header for each IPA command */ -struct qeth_ipacmd_hdr { - __u8 command; - __u8 initiator; - __u16 seqno; - __u16 return_code; - __u8 adapter_type; - __u8 rel_adapter_no; - __u8 prim_version_no; - __u8 param_count; - __u16 prot_version; - __u32 ipa_supported; - __u32 ipa_enabled; -} __attribute__ ((packed)); - -/* The IPA command itself */ -struct qeth_ipa_cmd { - struct qeth_ipacmd_hdr hdr; - union { - struct qeth_ipacmd_setdelip4 setdelip4; - struct qeth_ipacmd_setdelip6 setdelip6; - struct qeth_ipacmd_setdelipm setdelipm; - struct qeth_ipacmd_setassparms setassparms; - struct qeth_ipacmd_layer2setdelmac setdelmac; - struct qeth_ipacmd_layer2setdelvlan setdelvlan; - struct qeth_create_destroy_address create_destroy_addr; - struct qeth_ipacmd_setadpparms setadapterparms; - struct qeth_set_routing setrtg; - } data; -} __attribute__ ((packed)); - -/* - * special command for ARP processing. 
- * this is not included in setassparms command before, because we get - * problem with the size of struct qeth_ipacmd_setassparms otherwise - */ -enum qeth_ipa_arp_return_codes { - QETH_IPA_ARP_RC_SUCCESS = 0x0000, - QETH_IPA_ARP_RC_FAILED = 0x0001, - QETH_IPA_ARP_RC_NOTSUPP = 0x0002, - QETH_IPA_ARP_RC_OUT_OF_RANGE = 0x0003, - QETH_IPA_ARP_RC_Q_NOTSUPP = 0x0004, - QETH_IPA_ARP_RC_Q_NO_DATA = 0x0008, -}; - - -extern char * -qeth_get_ipa_msg(enum qeth_ipa_return_codes rc); -extern char * -qeth_get_ipa_cmd_name(enum qeth_ipa_cmds cmd); - -#define QETH_SETASS_BASE_LEN (sizeof(struct qeth_ipacmd_hdr) + \ - sizeof(struct qeth_ipacmd_setassparms_hdr)) -#define QETH_IPA_ARP_DATA_POS(buffer) (buffer + IPA_PDU_HEADER_SIZE + \ - QETH_SETASS_BASE_LEN) -#define QETH_SETADP_BASE_LEN (sizeof(struct qeth_ipacmd_hdr) + \ - sizeof(struct qeth_ipacmd_setadpparms_hdr)) -#define QETH_SNMP_SETADP_CMDLENGTH 16 - -#define QETH_ARP_DATA_SIZE 3968 -#define QETH_ARP_CMD_LEN (QETH_ARP_DATA_SIZE + 8) -/* Helper functions */ -#define IS_IPA_REPLY(cmd) ((cmd->hdr.initiator == IPA_CMD_INITIATOR_HOST) || \ - (cmd->hdr.initiator == IPA_CMD_INITIATOR_OSA_REPLY)) - -/*****************************************************************************/ -/* END OF IP Assist related definitions */ -/*****************************************************************************/ - - -extern unsigned char WRITE_CCW[]; -extern unsigned char READ_CCW[]; - -extern unsigned char CM_ENABLE[]; -#define CM_ENABLE_SIZE 0x63 -#define QETH_CM_ENABLE_ISSUER_RM_TOKEN(buffer) (buffer+0x2c) -#define QETH_CM_ENABLE_FILTER_TOKEN(buffer) (buffer+0x53) -#define QETH_CM_ENABLE_USER_DATA(buffer) (buffer+0x5b) - -#define QETH_CM_ENABLE_RESP_FILTER_TOKEN(buffer) \ - (PDU_ENCAPSULATION(buffer)+ 0x13) - - -extern unsigned char CM_SETUP[]; -#define CM_SETUP_SIZE 0x64 -#define QETH_CM_SETUP_DEST_ADDR(buffer) (buffer+0x2c) -#define QETH_CM_SETUP_CONNECTION_TOKEN(buffer) (buffer+0x51) -#define QETH_CM_SETUP_FILTER_TOKEN(buffer) (buffer+0x5a) - -#define QETH_CM_SETUP_RESP_DEST_ADDR(buffer) \ - (PDU_ENCAPSULATION(buffer) + 0x1a) - -extern unsigned char ULP_ENABLE[]; -#define ULP_ENABLE_SIZE 0x6b -#define QETH_ULP_ENABLE_LINKNUM(buffer) (buffer+0x61) -#define QETH_ULP_ENABLE_DEST_ADDR(buffer) (buffer+0x2c) -#define QETH_ULP_ENABLE_FILTER_TOKEN(buffer) (buffer+0x53) -#define QETH_ULP_ENABLE_PORTNAME_AND_LL(buffer) (buffer+0x62) -#define QETH_ULP_ENABLE_RESP_FILTER_TOKEN(buffer) \ - (PDU_ENCAPSULATION(buffer) + 0x13) -#define QETH_ULP_ENABLE_RESP_MAX_MTU(buffer) \ - (PDU_ENCAPSULATION(buffer)+ 0x1f) -#define QETH_ULP_ENABLE_RESP_DIFINFO_LEN(buffer) \ - (PDU_ENCAPSULATION(buffer) + 0x17) -#define QETH_ULP_ENABLE_RESP_LINK_TYPE(buffer) \ - (PDU_ENCAPSULATION(buffer)+ 0x2b) -/* Layer 2 defintions */ -#define QETH_PROT_LAYER2 0x08 -#define QETH_PROT_TCPIP 0x03 -#define QETH_PROT_OSN2 0x0a -#define QETH_ULP_ENABLE_PROT_TYPE(buffer) (buffer+0x50) -#define QETH_IPA_CMD_PROT_TYPE(buffer) (buffer+0x19) - -extern unsigned char ULP_SETUP[]; -#define ULP_SETUP_SIZE 0x6c -#define QETH_ULP_SETUP_DEST_ADDR(buffer) (buffer+0x2c) -#define QETH_ULP_SETUP_CONNECTION_TOKEN(buffer) (buffer+0x51) -#define QETH_ULP_SETUP_FILTER_TOKEN(buffer) (buffer+0x5a) -#define QETH_ULP_SETUP_CUA(buffer) (buffer+0x68) -#define QETH_ULP_SETUP_REAL_DEVADDR(buffer) (buffer+0x6a) - -#define QETH_ULP_SETUP_RESP_CONNECTION_TOKEN(buffer) \ - (PDU_ENCAPSULATION(buffer)+0x1a) - - -extern unsigned char DM_ACT[]; -#define DM_ACT_SIZE 0x55 -#define QETH_DM_ACT_DEST_ADDR(buffer) (buffer+0x2c) -#define 
QETH_DM_ACT_CONNECTION_TOKEN(buffer) (buffer+0x51) - - - -#define QETH_TRANSPORT_HEADER_SEQ_NO(buffer) (buffer+4) -#define QETH_PDU_HEADER_SEQ_NO(buffer) (buffer+0x1c) -#define QETH_PDU_HEADER_ACK_SEQ_NO(buffer) (buffer+0x20) - -extern unsigned char IDX_ACTIVATE_READ[]; -extern unsigned char IDX_ACTIVATE_WRITE[]; - -#define IDX_ACTIVATE_SIZE 0x22 -#define QETH_IDX_ACT_ISSUER_RM_TOKEN(buffer) (buffer+0x0c) -#define QETH_IDX_NO_PORTNAME_REQUIRED(buffer) ((buffer)[0x0b]&0x80) -#define QETH_IDX_ACT_FUNC_LEVEL(buffer) (buffer+0x10) -#define QETH_IDX_ACT_DATASET_NAME(buffer) (buffer+0x16) -#define QETH_IDX_ACT_QDIO_DEV_CUA(buffer) (buffer+0x1e) -#define QETH_IDX_ACT_QDIO_DEV_REALADDR(buffer) (buffer+0x20) -#define QETH_IS_IDX_ACT_POS_REPLY(buffer) (((buffer)[0x08]&3)==2) -#define QETH_IDX_REPLY_LEVEL(buffer) (buffer+0x12) -#define QETH_IDX_ACT_CAUSE_CODE(buffer) (buffer)[0x09] - -#define PDU_ENCAPSULATION(buffer) \ - (buffer + *(buffer + (*(buffer+0x0b)) + \ - *(buffer + *(buffer+0x0b)+0x11) +0x07)) - -#define IS_IPA(buffer) \ - ((buffer) && \ - ( *(buffer + ((*(buffer+0x0b))+4) )==0xc1) ) - -#define ADDR_FRAME_TYPE_DIX 1 -#define ADDR_FRAME_TYPE_802_3 2 -#define ADDR_FRAME_TYPE_TR_WITHOUT_SR 0x10 -#define ADDR_FRAME_TYPE_TR_WITH_SR 0x20 - -#endif diff --git a/drivers/s390/net/qeth_proc.c b/drivers/s390/net/qeth_proc.c deleted file mode 100644 index 46ecd03a597e..000000000000 --- a/drivers/s390/net/qeth_proc.c +++ /dev/null @@ -1,316 +0,0 @@ -/* - * - * linux/drivers/s390/net/qeth_fs.c - * - * Linux on zSeries OSA Express and HiperSockets support - * This file contains code related to procfs. - * - * Copyright 2000,2003 IBM Corporation - * - * Author(s): Thomas Spatzier - * - */ -#include -#include -#include -#include -#include -#include - -#include "qeth.h" -#include "qeth_mpc.h" -#include "qeth_fs.h" - -/***** /proc/qeth *****/ -#define QETH_PROCFILE_NAME "qeth" -static struct proc_dir_entry *qeth_procfile; - -static int -qeth_procfile_seq_match(struct device *dev, void *data) -{ - return(dev ? 
1 : 0); -} - -static void * -qeth_procfile_seq_start(struct seq_file *s, loff_t *offset) -{ - struct device *dev = NULL; - loff_t nr = 0; - - if (*offset == 0) - return SEQ_START_TOKEN; - while (1) { - dev = driver_find_device(&qeth_ccwgroup_driver.driver, dev, - NULL, qeth_procfile_seq_match); - if (++nr == *offset) - break; - put_device(dev); - } - return dev; -} - -static void -qeth_procfile_seq_stop(struct seq_file *s, void* it) -{ -} - -static void * -qeth_procfile_seq_next(struct seq_file *s, void *it, loff_t *offset) -{ - struct device *prev, *next; - - if (it == SEQ_START_TOKEN) - prev = NULL; - else - prev = (struct device *) it; - next = driver_find_device(&qeth_ccwgroup_driver.driver, - prev, NULL, qeth_procfile_seq_match); - (*offset)++; - return (void *) next; -} - -static inline const char * -qeth_get_router_str(struct qeth_card *card, int ipv) -{ - enum qeth_routing_types routing_type = NO_ROUTER; - - if (ipv == 4) { - routing_type = card->options.route4.type; - } else { -#ifdef CONFIG_QETH_IPV6 - routing_type = card->options.route6.type; -#else - return "n/a"; -#endif /* CONFIG_QETH_IPV6 */ - } - - switch (routing_type){ - case PRIMARY_ROUTER: - return "pri"; - case SECONDARY_ROUTER: - return "sec"; - case MULTICAST_ROUTER: - if (card->info.broadcast_capable == QETH_BROADCAST_WITHOUT_ECHO) - return "mc+"; - return "mc"; - case PRIMARY_CONNECTOR: - if (card->info.broadcast_capable == QETH_BROADCAST_WITHOUT_ECHO) - return "p+c"; - return "p.c"; - case SECONDARY_CONNECTOR: - if (card->info.broadcast_capable == QETH_BROADCAST_WITHOUT_ECHO) - return "s+c"; - return "s.c"; - default: /* NO_ROUTER */ - return "no"; - } -} - -static int -qeth_procfile_seq_show(struct seq_file *s, void *it) -{ - struct device *device; - struct qeth_card *card; - char tmp[12]; /* for qeth_get_prioq_str */ - - if (it == SEQ_START_TOKEN){ - seq_printf(s, "devices CHPID interface " - "cardtype port chksum prio-q'ing rtr4 " - "rtr6 fsz cnt\n"); - seq_printf(s, "-------------------------- ----- ---------- " - "-------------- ---- ------ ---------- ---- " - "---- ----- -----\n"); - } else { - device = (struct device *) it; - card = device->driver_data; - seq_printf(s, "%s/%s/%s x%02X %-10s %-14s %-4i ", - CARD_RDEV_ID(card), - CARD_WDEV_ID(card), - CARD_DDEV_ID(card), - card->info.chpid, - QETH_CARD_IFNAME(card), - qeth_get_cardname_short(card), - card->info.portno); - if (card->lan_online) - seq_printf(s, "%-6s %-10s %-4s %-4s %-5s %-5i\n", - qeth_get_checksum_str(card), - qeth_get_prioq_str(card, tmp), - qeth_get_router_str(card, 4), - qeth_get_router_str(card, 6), - qeth_get_bufsize_str(card), - card->qdio.in_buf_pool.buf_count); - else - seq_printf(s, " +++ LAN OFFLINE +++\n"); - put_device(device); - } - return 0; -} - -static const struct seq_operations qeth_procfile_seq_ops = { - .start = qeth_procfile_seq_start, - .stop = qeth_procfile_seq_stop, - .next = qeth_procfile_seq_next, - .show = qeth_procfile_seq_show, -}; - -static int -qeth_procfile_open(struct inode *inode, struct file *file) -{ - return seq_open(file, &qeth_procfile_seq_ops); -} - -static const struct file_operations qeth_procfile_fops = { - .owner = THIS_MODULE, - .open = qeth_procfile_open, - .read = seq_read, - .llseek = seq_lseek, - .release = seq_release, -}; - -/***** /proc/qeth_perf *****/ -#define QETH_PERF_PROCFILE_NAME "qeth_perf" -static struct proc_dir_entry *qeth_perf_procfile; - -static int -qeth_perf_procfile_seq_show(struct seq_file *s, void *it) -{ - struct device *device; - struct qeth_card *card; - - - if (it == 
SEQ_START_TOKEN) - return 0; - - device = (struct device *) it; - card = device->driver_data; - seq_printf(s, "For card with devnos %s/%s/%s (%s):\n", - CARD_RDEV_ID(card), - CARD_WDEV_ID(card), - CARD_DDEV_ID(card), - QETH_CARD_IFNAME(card) - ); - if (!card->options.performance_stats) - seq_printf(s, "Performance statistics are deactivated.\n"); - seq_printf(s, " Skb's/buffers received : %lu/%u\n" - " Skb's/buffers sent : %lu/%u\n\n", - card->stats.rx_packets - - card->perf_stats.initial_rx_packets, - card->perf_stats.bufs_rec, - card->stats.tx_packets - - card->perf_stats.initial_tx_packets, - card->perf_stats.bufs_sent - ); - seq_printf(s, " Skb's/buffers sent without packing : %lu/%u\n" - " Skb's/buffers sent with packing : %u/%u\n\n", - card->stats.tx_packets - card->perf_stats.initial_tx_packets - - card->perf_stats.skbs_sent_pack, - card->perf_stats.bufs_sent - card->perf_stats.bufs_sent_pack, - card->perf_stats.skbs_sent_pack, - card->perf_stats.bufs_sent_pack - ); - seq_printf(s, " Skbs sent in SG mode : %u\n" - " Skb fragments sent in SG mode : %u\n\n", - card->perf_stats.sg_skbs_sent, - card->perf_stats.sg_frags_sent); - seq_printf(s, " Skbs received in SG mode : %u\n" - " Skb fragments received in SG mode : %u\n" - " Page allocations for rx SG mode : %u\n\n", - card->perf_stats.sg_skbs_rx, - card->perf_stats.sg_frags_rx, - card->perf_stats.sg_alloc_page_rx); - seq_printf(s, " large_send tx (in Kbytes) : %u\n" - " large_send count : %u\n\n", - card->perf_stats.large_send_bytes >> 10, - card->perf_stats.large_send_cnt); - seq_printf(s, " Packing state changes no pkg.->packing : %u/%u\n" - " Watermarks L/H : %i/%i\n" - " Current buffer usage (outbound q's) : " - "%i/%i/%i/%i\n\n", - card->perf_stats.sc_dp_p, card->perf_stats.sc_p_dp, - QETH_LOW_WATERMARK_PACK, QETH_HIGH_WATERMARK_PACK, - atomic_read(&card->qdio.out_qs[0]->used_buffers), - (card->qdio.no_out_queues > 1)? - atomic_read(&card->qdio.out_qs[1]->used_buffers) - : 0, - (card->qdio.no_out_queues > 2)? - atomic_read(&card->qdio.out_qs[2]->used_buffers) - : 0, - (card->qdio.no_out_queues > 3)? 
- atomic_read(&card->qdio.out_qs[3]->used_buffers) - : 0 - ); - seq_printf(s, " Inbound handler time (in us) : %u\n" - " Inbound handler count : %u\n" - " Inbound do_QDIO time (in us) : %u\n" - " Inbound do_QDIO count : %u\n\n" - " Outbound handler time (in us) : %u\n" - " Outbound handler count : %u\n\n" - " Outbound time (in us, incl QDIO) : %u\n" - " Outbound count : %u\n" - " Outbound do_QDIO time (in us) : %u\n" - " Outbound do_QDIO count : %u\n\n", - card->perf_stats.inbound_time, - card->perf_stats.inbound_cnt, - card->perf_stats.inbound_do_qdio_time, - card->perf_stats.inbound_do_qdio_cnt, - card->perf_stats.outbound_handler_time, - card->perf_stats.outbound_handler_cnt, - card->perf_stats.outbound_time, - card->perf_stats.outbound_cnt, - card->perf_stats.outbound_do_qdio_time, - card->perf_stats.outbound_do_qdio_cnt - ); - put_device(device); - return 0; -} - -static const struct seq_operations qeth_perf_procfile_seq_ops = { - .start = qeth_procfile_seq_start, - .stop = qeth_procfile_seq_stop, - .next = qeth_procfile_seq_next, - .show = qeth_perf_procfile_seq_show, -}; - -static int -qeth_perf_procfile_open(struct inode *inode, struct file *file) -{ - return seq_open(file, &qeth_perf_procfile_seq_ops); -} - -static const struct file_operations qeth_perf_procfile_fops = { - .owner = THIS_MODULE, - .open = qeth_perf_procfile_open, - .read = seq_read, - .llseek = seq_lseek, - .release = seq_release, -}; - -int __init -qeth_create_procfs_entries(void) -{ - qeth_procfile = create_proc_entry(QETH_PROCFILE_NAME, - S_IFREG | 0444, NULL); - if (qeth_procfile) - qeth_procfile->proc_fops = &qeth_procfile_fops; - - qeth_perf_procfile = create_proc_entry(QETH_PERF_PROCFILE_NAME, - S_IFREG | 0444, NULL); - if (qeth_perf_procfile) - qeth_perf_procfile->proc_fops = &qeth_perf_procfile_fops; - - if (qeth_procfile && - qeth_perf_procfile) - return 0; - else - return -ENOMEM; -} - -void __exit -qeth_remove_procfs_entries(void) -{ - if (qeth_procfile) - remove_proc_entry(QETH_PROCFILE_NAME, NULL); - if (qeth_perf_procfile) - remove_proc_entry(QETH_PERF_PROCFILE_NAME, NULL); -} - diff --git a/drivers/s390/net/qeth_sys.c b/drivers/s390/net/qeth_sys.c deleted file mode 100644 index 2cc3f3a0e393..000000000000 --- a/drivers/s390/net/qeth_sys.c +++ /dev/null @@ -1,1858 +0,0 @@ -/* - * - * linux/drivers/s390/net/qeth_sys.c - * - * Linux on zSeries OSA Express and HiperSockets support - * This file contains code related to sysfs. - * - * Copyright 2000,2003 IBM Corporation - * - * Author(s): Thomas Spatzier - * Frank Pavlic - * - */ -#include -#include - -#include - -#include "qeth.h" -#include "qeth_mpc.h" -#include "qeth_fs.h" - -/*****************************************************************************/ -/* */ -/* /sys-fs stuff UNDER DEVELOPMENT !!! 
*/ -/* */ -/*****************************************************************************/ -//low/high watermark - -static ssize_t -qeth_dev_state_show(struct device *dev, struct device_attribute *attr, char *buf) -{ - struct qeth_card *card = dev->driver_data; - if (!card) - return -EINVAL; - - switch (card->state) { - case CARD_STATE_DOWN: - return sprintf(buf, "DOWN\n"); - case CARD_STATE_HARDSETUP: - return sprintf(buf, "HARDSETUP\n"); - case CARD_STATE_SOFTSETUP: - return sprintf(buf, "SOFTSETUP\n"); - case CARD_STATE_UP: - if (card->lan_online) - return sprintf(buf, "UP (LAN ONLINE)\n"); - else - return sprintf(buf, "UP (LAN OFFLINE)\n"); - case CARD_STATE_RECOVER: - return sprintf(buf, "RECOVER\n"); - default: - return sprintf(buf, "UNKNOWN\n"); - } -} - -static DEVICE_ATTR(state, 0444, qeth_dev_state_show, NULL); - -static ssize_t -qeth_dev_chpid_show(struct device *dev, struct device_attribute *attr, char *buf) -{ - struct qeth_card *card = dev->driver_data; - if (!card) - return -EINVAL; - - return sprintf(buf, "%02X\n", card->info.chpid); -} - -static DEVICE_ATTR(chpid, 0444, qeth_dev_chpid_show, NULL); - -static ssize_t -qeth_dev_if_name_show(struct device *dev, struct device_attribute *attr, char *buf) -{ - struct qeth_card *card = dev->driver_data; - if (!card) - return -EINVAL; - return sprintf(buf, "%s\n", QETH_CARD_IFNAME(card)); -} - -static DEVICE_ATTR(if_name, 0444, qeth_dev_if_name_show, NULL); - -static ssize_t -qeth_dev_card_type_show(struct device *dev, struct device_attribute *attr, char *buf) -{ - struct qeth_card *card = dev->driver_data; - if (!card) - return -EINVAL; - - return sprintf(buf, "%s\n", qeth_get_cardname_short(card)); -} - -static DEVICE_ATTR(card_type, 0444, qeth_dev_card_type_show, NULL); - -static ssize_t -qeth_dev_portno_show(struct device *dev, struct device_attribute *attr, char *buf) -{ - struct qeth_card *card = dev->driver_data; - if (!card) - return -EINVAL; - - return sprintf(buf, "%i\n", card->info.portno); -} - -static ssize_t -qeth_dev_portno_store(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) -{ - struct qeth_card *card = dev->driver_data; - char *tmp; - unsigned int portno; - - if (!card) - return -EINVAL; - - if ((card->state != CARD_STATE_DOWN) && - (card->state != CARD_STATE_RECOVER)) - return -EPERM; - - portno = simple_strtoul(buf, &tmp, 16); - if (portno > MAX_PORTNO){ - PRINT_WARN("portno 0x%X is out of range\n", portno); - return -EINVAL; - } - - card->info.portno = portno; - return count; -} - -static DEVICE_ATTR(portno, 0644, qeth_dev_portno_show, qeth_dev_portno_store); - -static ssize_t -qeth_dev_portname_show(struct device *dev, struct device_attribute *attr, char *buf) -{ - struct qeth_card *card = dev->driver_data; - char portname[9] = {0, }; - - if (!card) - return -EINVAL; - - if (card->info.portname_required) { - memcpy(portname, card->info.portname + 1, 8); - EBCASC(portname, 8); - return sprintf(buf, "%s\n", portname); - } else - return sprintf(buf, "no portname required\n"); -} - -static ssize_t -qeth_dev_portname_store(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) -{ - struct qeth_card *card = dev->driver_data; - char *tmp; - int i; - - if (!card) - return -EINVAL; - - if ((card->state != CARD_STATE_DOWN) && - (card->state != CARD_STATE_RECOVER)) - return -EPERM; - - tmp = strsep((char **) &buf, "\n"); - if ((strlen(tmp) > 8) || (strlen(tmp) == 0)) - return -EINVAL; - - card->info.portname[0] = strlen(tmp); - /* for beauty reasons */ - for 
(i = 1; i < 9; i++) - card->info.portname[i] = ' '; - strcpy(card->info.portname + 1, tmp); - ASCEBC(card->info.portname + 1, 8); - - return count; -} - -static DEVICE_ATTR(portname, 0644, qeth_dev_portname_show, - qeth_dev_portname_store); - -static ssize_t -qeth_dev_checksum_show(struct device *dev, struct device_attribute *attr, char *buf) -{ - struct qeth_card *card = dev->driver_data; - - if (!card) - return -EINVAL; - - return sprintf(buf, "%s checksumming\n", qeth_get_checksum_str(card)); -} - -static ssize_t -qeth_dev_checksum_store(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) -{ - struct qeth_card *card = dev->driver_data; - char *tmp; - - if (!card) - return -EINVAL; - - if ((card->state != CARD_STATE_DOWN) && - (card->state != CARD_STATE_RECOVER)) - return -EPERM; - - tmp = strsep((char **) &buf, "\n"); - if (!strcmp(tmp, "sw_checksumming")) - card->options.checksum_type = SW_CHECKSUMMING; - else if (!strcmp(tmp, "hw_checksumming")) - card->options.checksum_type = HW_CHECKSUMMING; - else if (!strcmp(tmp, "no_checksumming")) - card->options.checksum_type = NO_CHECKSUMMING; - else { - PRINT_WARN("Unknown checksumming type '%s'\n", tmp); - return -EINVAL; - } - return count; -} - -static DEVICE_ATTR(checksumming, 0644, qeth_dev_checksum_show, - qeth_dev_checksum_store); - -static ssize_t -qeth_dev_prioqing_show(struct device *dev, struct device_attribute *attr, char *buf) -{ - struct qeth_card *card = dev->driver_data; - - if (!card) - return -EINVAL; - - switch (card->qdio.do_prio_queueing) { - case QETH_PRIO_Q_ING_PREC: - return sprintf(buf, "%s\n", "by precedence"); - case QETH_PRIO_Q_ING_TOS: - return sprintf(buf, "%s\n", "by type of service"); - default: - return sprintf(buf, "always queue %i\n", - card->qdio.default_out_queue); - } -} - -static ssize_t -qeth_dev_prioqing_store(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) -{ - struct qeth_card *card = dev->driver_data; - char *tmp; - - if (!card) - return -EINVAL; - - if ((card->state != CARD_STATE_DOWN) && - (card->state != CARD_STATE_RECOVER)) - return -EPERM; - - /* check if 1920 devices are supported , - * if though we have to permit priority queueing - */ - if (card->qdio.no_out_queues == 1) { - PRINT_WARN("Priority queueing disabled due " - "to hardware limitations!\n"); - card->qdio.do_prio_queueing = QETH_PRIOQ_DEFAULT; - return -EPERM; - } - - tmp = strsep((char **) &buf, "\n"); - if (!strcmp(tmp, "prio_queueing_prec")) - card->qdio.do_prio_queueing = QETH_PRIO_Q_ING_PREC; - else if (!strcmp(tmp, "prio_queueing_tos")) - card->qdio.do_prio_queueing = QETH_PRIO_Q_ING_TOS; - else if (!strcmp(tmp, "no_prio_queueing:0")) { - card->qdio.do_prio_queueing = QETH_NO_PRIO_QUEUEING; - card->qdio.default_out_queue = 0; - } else if (!strcmp(tmp, "no_prio_queueing:1")) { - card->qdio.do_prio_queueing = QETH_NO_PRIO_QUEUEING; - card->qdio.default_out_queue = 1; - } else if (!strcmp(tmp, "no_prio_queueing:2")) { - card->qdio.do_prio_queueing = QETH_NO_PRIO_QUEUEING; - card->qdio.default_out_queue = 2; - } else if (!strcmp(tmp, "no_prio_queueing:3")) { - card->qdio.do_prio_queueing = QETH_NO_PRIO_QUEUEING; - card->qdio.default_out_queue = 3; - } else if (!strcmp(tmp, "no_prio_queueing")) { - card->qdio.do_prio_queueing = QETH_NO_PRIO_QUEUEING; - card->qdio.default_out_queue = QETH_DEFAULT_QUEUE; - } else { - PRINT_WARN("Unknown queueing type '%s'\n", tmp); - return -EINVAL; - } - return count; -} - -static DEVICE_ATTR(priority_queueing, 0644, 
qeth_dev_prioqing_show, - qeth_dev_prioqing_store); - -static ssize_t -qeth_dev_bufcnt_show(struct device *dev, struct device_attribute *attr, char *buf) -{ - struct qeth_card *card = dev->driver_data; - - if (!card) - return -EINVAL; - - return sprintf(buf, "%i\n", card->qdio.in_buf_pool.buf_count); -} - -static ssize_t -qeth_dev_bufcnt_store(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) -{ - struct qeth_card *card = dev->driver_data; - char *tmp; - int cnt, old_cnt; - int rc; - - if (!card) - return -EINVAL; - - if ((card->state != CARD_STATE_DOWN) && - (card->state != CARD_STATE_RECOVER)) - return -EPERM; - - old_cnt = card->qdio.in_buf_pool.buf_count; - cnt = simple_strtoul(buf, &tmp, 10); - cnt = (cnt < QETH_IN_BUF_COUNT_MIN) ? QETH_IN_BUF_COUNT_MIN : - ((cnt > QETH_IN_BUF_COUNT_MAX) ? QETH_IN_BUF_COUNT_MAX : cnt); - if (old_cnt != cnt) { - if ((rc = qeth_realloc_buffer_pool(card, cnt))) - PRINT_WARN("Error (%d) while setting " - "buffer count.\n", rc); - } - return count; -} - -static DEVICE_ATTR(buffer_count, 0644, qeth_dev_bufcnt_show, - qeth_dev_bufcnt_store); - -static ssize_t -qeth_dev_route_show(struct qeth_card *card, struct qeth_routing_info *route, - char *buf) -{ - switch (route->type) { - case PRIMARY_ROUTER: - return sprintf(buf, "%s\n", "primary router"); - case SECONDARY_ROUTER: - return sprintf(buf, "%s\n", "secondary router"); - case MULTICAST_ROUTER: - if (card->info.broadcast_capable == QETH_BROADCAST_WITHOUT_ECHO) - return sprintf(buf, "%s\n", "multicast router+"); - else - return sprintf(buf, "%s\n", "multicast router"); - case PRIMARY_CONNECTOR: - if (card->info.broadcast_capable == QETH_BROADCAST_WITHOUT_ECHO) - return sprintf(buf, "%s\n", "primary connector+"); - else - return sprintf(buf, "%s\n", "primary connector"); - case SECONDARY_CONNECTOR: - if (card->info.broadcast_capable == QETH_BROADCAST_WITHOUT_ECHO) - return sprintf(buf, "%s\n", "secondary connector+"); - else - return sprintf(buf, "%s\n", "secondary connector"); - default: - return sprintf(buf, "%s\n", "no"); - } -} - -static ssize_t -qeth_dev_route4_show(struct device *dev, struct device_attribute *attr, char *buf) -{ - struct qeth_card *card = dev->driver_data; - - if (!card) - return -EINVAL; - - return qeth_dev_route_show(card, &card->options.route4, buf); -} - -static ssize_t -qeth_dev_route_store(struct qeth_card *card, struct qeth_routing_info *route, - enum qeth_prot_versions prot, const char *buf, size_t count) -{ - enum qeth_routing_types old_route_type = route->type; - char *tmp; - int rc; - - tmp = strsep((char **) &buf, "\n"); - - if (!strcmp(tmp, "no_router")){ - route->type = NO_ROUTER; - } else if (!strcmp(tmp, "primary_connector")) { - route->type = PRIMARY_CONNECTOR; - } else if (!strcmp(tmp, "secondary_connector")) { - route->type = SECONDARY_CONNECTOR; - } else if (!strcmp(tmp, "primary_router")) { - route->type = PRIMARY_ROUTER; - } else if (!strcmp(tmp, "secondary_router")) { - route->type = SECONDARY_ROUTER; - } else if (!strcmp(tmp, "multicast_router")) { - route->type = MULTICAST_ROUTER; - } else { - PRINT_WARN("Invalid routing type '%s'.\n", tmp); - return -EINVAL; - } - if (((card->state == CARD_STATE_SOFTSETUP) || - (card->state == CARD_STATE_UP)) && - (old_route_type != route->type)){ - if (prot == QETH_PROT_IPV4) - rc = qeth_setrouting_v4(card); - else if (prot == QETH_PROT_IPV6) - rc = qeth_setrouting_v6(card); - } - return count; -} - -static ssize_t -qeth_dev_route4_store(struct device *dev, struct device_attribute *attr, const 
char *buf, size_t count) -{ - struct qeth_card *card = dev->driver_data; - - if (!card) - return -EINVAL; - - return qeth_dev_route_store(card, &card->options.route4, - QETH_PROT_IPV4, buf, count); -} - -static DEVICE_ATTR(route4, 0644, qeth_dev_route4_show, qeth_dev_route4_store); - -#ifdef CONFIG_QETH_IPV6 -static ssize_t -qeth_dev_route6_show(struct device *dev, struct device_attribute *attr, char *buf) -{ - struct qeth_card *card = dev->driver_data; - - if (!card) - return -EINVAL; - - if (!qeth_is_supported(card, IPA_IPV6)) - return sprintf(buf, "%s\n", "n/a"); - - return qeth_dev_route_show(card, &card->options.route6, buf); -} - -static ssize_t -qeth_dev_route6_store(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) -{ - struct qeth_card *card = dev->driver_data; - - if (!card) - return -EINVAL; - - if (!qeth_is_supported(card, IPA_IPV6)){ - PRINT_WARN("IPv6 not supported for interface %s.\n" - "Routing status no changed.\n", - QETH_CARD_IFNAME(card)); - return -ENOTSUPP; - } - - return qeth_dev_route_store(card, &card->options.route6, - QETH_PROT_IPV6, buf, count); -} - -static DEVICE_ATTR(route6, 0644, qeth_dev_route6_show, qeth_dev_route6_store); -#endif - -static ssize_t -qeth_dev_add_hhlen_show(struct device *dev, struct device_attribute *attr, char *buf) -{ - struct qeth_card *card = dev->driver_data; - - if (!card) - return -EINVAL; - - return sprintf(buf, "%i\n", card->options.add_hhlen); -} - -static ssize_t -qeth_dev_add_hhlen_store(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) -{ - struct qeth_card *card = dev->driver_data; - char *tmp; - int i; - - if (!card) - return -EINVAL; - - if ((card->state != CARD_STATE_DOWN) && - (card->state != CARD_STATE_RECOVER)) - return -EPERM; - - i = simple_strtoul(buf, &tmp, 10); - if ((i < 0) || (i > MAX_ADD_HHLEN)) { - PRINT_WARN("add_hhlen out of range\n"); - return -EINVAL; - } - card->options.add_hhlen = i; - - return count; -} - -static DEVICE_ATTR(add_hhlen, 0644, qeth_dev_add_hhlen_show, - qeth_dev_add_hhlen_store); - -static ssize_t -qeth_dev_fake_ll_show(struct device *dev, struct device_attribute *attr, char *buf) -{ - struct qeth_card *card = dev->driver_data; - - if (!card) - return -EINVAL; - - return sprintf(buf, "%i\n", card->options.fake_ll? 1:0); -} - -static ssize_t -qeth_dev_fake_ll_store(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) -{ - struct qeth_card *card = dev->driver_data; - char *tmp; - int i; - - if (!card) - return -EINVAL; - - if ((card->state != CARD_STATE_DOWN) && - (card->state != CARD_STATE_RECOVER)) - return -EPERM; - - i = simple_strtoul(buf, &tmp, 16); - if ((i != 0) && (i != 1)) { - PRINT_WARN("fake_ll: write 0 or 1 to this file!\n"); - return -EINVAL; - } - card->options.fake_ll = i; - return count; -} - -static DEVICE_ATTR(fake_ll, 0644, qeth_dev_fake_ll_show, - qeth_dev_fake_ll_store); - -static ssize_t -qeth_dev_fake_broadcast_show(struct device *dev, struct device_attribute *attr, char *buf) -{ - struct qeth_card *card = dev->driver_data; - - if (!card) - return -EINVAL; - - return sprintf(buf, "%i\n", card->options.fake_broadcast? 
1:0); -} - -static ssize_t -qeth_dev_fake_broadcast_store(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) -{ - struct qeth_card *card = dev->driver_data; - char *tmp; - int i; - - if (!card) - return -EINVAL; - - if ((card->state != CARD_STATE_DOWN) && - (card->state != CARD_STATE_RECOVER)) - return -EPERM; - - i = simple_strtoul(buf, &tmp, 16); - if ((i == 0) || (i == 1)) - card->options.fake_broadcast = i; - else { - PRINT_WARN("fake_broadcast: write 0 or 1 to this file!\n"); - return -EINVAL; - } - return count; -} - -static DEVICE_ATTR(fake_broadcast, 0644, qeth_dev_fake_broadcast_show, - qeth_dev_fake_broadcast_store); - -static ssize_t -qeth_dev_recover_store(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) -{ - struct qeth_card *card = dev->driver_data; - char *tmp; - int i; - - if (!card) - return -EINVAL; - - if (card->state != CARD_STATE_UP) - return -EPERM; - - i = simple_strtoul(buf, &tmp, 16); - if (i == 1) - qeth_schedule_recovery(card); - - return count; -} - -static DEVICE_ATTR(recover, 0200, NULL, qeth_dev_recover_store); - -static ssize_t -qeth_dev_broadcast_mode_show(struct device *dev, struct device_attribute *attr, char *buf) -{ - struct qeth_card *card = dev->driver_data; - - if (!card) - return -EINVAL; - - if (!((card->info.link_type == QETH_LINK_TYPE_HSTR) || - (card->info.link_type == QETH_LINK_TYPE_LANE_TR))) - return sprintf(buf, "n/a\n"); - - return sprintf(buf, "%s\n", (card->options.broadcast_mode == - QETH_TR_BROADCAST_ALLRINGS)? - "all rings":"local"); -} - -static ssize_t -qeth_dev_broadcast_mode_store(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) -{ - struct qeth_card *card = dev->driver_data; - char *tmp; - - if (!card) - return -EINVAL; - - if ((card->state != CARD_STATE_DOWN) && - (card->state != CARD_STATE_RECOVER)) - return -EPERM; - - if (!((card->info.link_type == QETH_LINK_TYPE_HSTR) || - (card->info.link_type == QETH_LINK_TYPE_LANE_TR))){ - PRINT_WARN("Device is not a tokenring device!\n"); - return -EINVAL; - } - - tmp = strsep((char **) &buf, "\n"); - - if (!strcmp(tmp, "local")){ - card->options.broadcast_mode = QETH_TR_BROADCAST_LOCAL; - return count; - } else if (!strcmp(tmp, "all_rings")) { - card->options.broadcast_mode = QETH_TR_BROADCAST_ALLRINGS; - return count; - } else { - PRINT_WARN("broadcast_mode: invalid mode %s!\n", - tmp); - return -EINVAL; - } - return count; -} - -static DEVICE_ATTR(broadcast_mode, 0644, qeth_dev_broadcast_mode_show, - qeth_dev_broadcast_mode_store); - -static ssize_t -qeth_dev_canonical_macaddr_show(struct device *dev, struct device_attribute *attr, char *buf) -{ - struct qeth_card *card = dev->driver_data; - - if (!card) - return -EINVAL; - - if (!((card->info.link_type == QETH_LINK_TYPE_HSTR) || - (card->info.link_type == QETH_LINK_TYPE_LANE_TR))) - return sprintf(buf, "n/a\n"); - - return sprintf(buf, "%i\n", (card->options.macaddr_mode == - QETH_TR_MACADDR_CANONICAL)? 
1:0); -} - -static ssize_t -qeth_dev_canonical_macaddr_store(struct device *dev, struct device_attribute *attr, const char *buf, - size_t count) -{ - struct qeth_card *card = dev->driver_data; - char *tmp; - int i; - - if (!card) - return -EINVAL; - - if ((card->state != CARD_STATE_DOWN) && - (card->state != CARD_STATE_RECOVER)) - return -EPERM; - - if (!((card->info.link_type == QETH_LINK_TYPE_HSTR) || - (card->info.link_type == QETH_LINK_TYPE_LANE_TR))){ - PRINT_WARN("Device is not a tokenring device!\n"); - return -EINVAL; - } - - i = simple_strtoul(buf, &tmp, 16); - if ((i == 0) || (i == 1)) - card->options.macaddr_mode = i? - QETH_TR_MACADDR_CANONICAL : - QETH_TR_MACADDR_NONCANONICAL; - else { - PRINT_WARN("canonical_macaddr: write 0 or 1 to this file!\n"); - return -EINVAL; - } - return count; -} - -static DEVICE_ATTR(canonical_macaddr, 0644, qeth_dev_canonical_macaddr_show, - qeth_dev_canonical_macaddr_store); - -static ssize_t -qeth_dev_layer2_show(struct device *dev, struct device_attribute *attr, char *buf) -{ - struct qeth_card *card = dev->driver_data; - - if (!card) - return -EINVAL; - - return sprintf(buf, "%i\n", card->options.layer2 ? 1:0); -} - -static ssize_t -qeth_dev_layer2_store(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) -{ - struct qeth_card *card = dev->driver_data; - char *tmp; - int i; - - if (!card) - return -EINVAL; - if (card->info.type == QETH_CARD_TYPE_IQD) { - PRINT_WARN("Layer2 on Hipersockets is not supported! \n"); - return -EPERM; - } - - if (((card->state != CARD_STATE_DOWN) && - (card->state != CARD_STATE_RECOVER))) - return -EPERM; - - i = simple_strtoul(buf, &tmp, 16); - if ((i == 0) || (i == 1)) - card->options.layer2 = i; - else { - PRINT_WARN("layer2: write 0 or 1 to this file!\n"); - return -EINVAL; - } - return count; -} - -static DEVICE_ATTR(layer2, 0644, qeth_dev_layer2_show, - qeth_dev_layer2_store); - -static ssize_t -qeth_dev_performance_stats_show(struct device *dev, struct device_attribute *attr, char *buf) -{ - struct qeth_card *card = dev->driver_data; - - if (!card) - return -EINVAL; - - return sprintf(buf, "%i\n", card->options.performance_stats ? 
1:0); -} - -static ssize_t -qeth_dev_performance_stats_store(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) -{ - struct qeth_card *card = dev->driver_data; - char *tmp; - int i; - - if (!card) - return -EINVAL; - - i = simple_strtoul(buf, &tmp, 16); - if ((i == 0) || (i == 1)) { - if (i == card->options.performance_stats) - return count; - card->options.performance_stats = i; - if (i == 0) - memset(&card->perf_stats, 0, - sizeof(struct qeth_perf_stats)); - card->perf_stats.initial_rx_packets = card->stats.rx_packets; - card->perf_stats.initial_tx_packets = card->stats.tx_packets; - } else { - PRINT_WARN("performance_stats: write 0 or 1 to this file!\n"); - return -EINVAL; - } - return count; -} - -static DEVICE_ATTR(performance_stats, 0644, qeth_dev_performance_stats_show, - qeth_dev_performance_stats_store); - -static ssize_t -qeth_dev_large_send_show(struct device *dev, struct device_attribute *attr, char *buf) -{ - struct qeth_card *card = dev->driver_data; - - if (!card) - return -EINVAL; - - switch (card->options.large_send) { - case QETH_LARGE_SEND_NO: - return sprintf(buf, "%s\n", "no"); - case QETH_LARGE_SEND_EDDP: - return sprintf(buf, "%s\n", "EDDP"); - case QETH_LARGE_SEND_TSO: - return sprintf(buf, "%s\n", "TSO"); - default: - return sprintf(buf, "%s\n", "N/A"); - } -} - -static ssize_t -qeth_dev_large_send_store(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) -{ - struct qeth_card *card = dev->driver_data; - enum qeth_large_send_types type; - int rc = 0; - char *tmp; - - if (!card) - return -EINVAL; - tmp = strsep((char **) &buf, "\n"); - if (!strcmp(tmp, "no")){ - type = QETH_LARGE_SEND_NO; - } else if (!strcmp(tmp, "EDDP")) { - type = QETH_LARGE_SEND_EDDP; - } else if (!strcmp(tmp, "TSO")) { - type = QETH_LARGE_SEND_TSO; - } else { - PRINT_WARN("large_send: invalid mode %s!\n", tmp); - return -EINVAL; - } - if (card->options.large_send == type) - return count; - if ((rc = qeth_set_large_send(card, type))) - return rc; - return count; -} - -static DEVICE_ATTR(large_send, 0644, qeth_dev_large_send_show, - qeth_dev_large_send_store); - -static ssize_t -qeth_dev_blkt_show(char *buf, struct qeth_card *card, int value ) -{ - - if (!card) - return -EINVAL; - - return sprintf(buf, "%i\n", value); -} - -static ssize_t -qeth_dev_blkt_store(struct qeth_card *card, const char *buf, size_t count, - int *value, int max_value) -{ - char *tmp; - int i; - - if (!card) - return -EINVAL; - - if ((card->state != CARD_STATE_DOWN) && - (card->state != CARD_STATE_RECOVER)) - return -EPERM; - - i = simple_strtoul(buf, &tmp, 10); - if (i <= max_value) { - *value = i; - } else { - PRINT_WARN("blkt total time: write values between" - " 0 and %d to this file!\n", max_value); - return -EINVAL; - } - return count; -} - -static ssize_t -qeth_dev_blkt_total_show(struct device *dev, struct device_attribute *attr, char *buf) -{ - struct qeth_card *card = dev->driver_data; - - return qeth_dev_blkt_show(buf, card, card->info.blkt.time_total); -} - - -static ssize_t -qeth_dev_blkt_total_store(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) -{ - struct qeth_card *card = dev->driver_data; - - return qeth_dev_blkt_store(card, buf, count, - &card->info.blkt.time_total,1000); -} - - - -static DEVICE_ATTR(total, 0644, qeth_dev_blkt_total_show, - qeth_dev_blkt_total_store); - -static ssize_t -qeth_dev_blkt_inter_show(struct device *dev, struct device_attribute *attr, char *buf) -{ - struct qeth_card *card = 
dev->driver_data; - - return qeth_dev_blkt_show(buf, card, card->info.blkt.inter_packet); -} - - -static ssize_t -qeth_dev_blkt_inter_store(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) -{ - struct qeth_card *card = dev->driver_data; - - return qeth_dev_blkt_store(card, buf, count, - &card->info.blkt.inter_packet,100); -} - -static DEVICE_ATTR(inter, 0644, qeth_dev_blkt_inter_show, - qeth_dev_blkt_inter_store); - -static ssize_t -qeth_dev_blkt_inter_jumbo_show(struct device *dev, struct device_attribute *attr, char *buf) -{ - struct qeth_card *card = dev->driver_data; - - return qeth_dev_blkt_show(buf, card, - card->info.blkt.inter_packet_jumbo); -} - - -static ssize_t -qeth_dev_blkt_inter_jumbo_store(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) -{ - struct qeth_card *card = dev->driver_data; - - return qeth_dev_blkt_store(card, buf, count, - &card->info.blkt.inter_packet_jumbo,100); -} - -static DEVICE_ATTR(inter_jumbo, 0644, qeth_dev_blkt_inter_jumbo_show, - qeth_dev_blkt_inter_jumbo_store); - -static struct device_attribute * qeth_blkt_device_attrs[] = { - &dev_attr_total, - &dev_attr_inter, - &dev_attr_inter_jumbo, - NULL, -}; - -static struct attribute_group qeth_device_blkt_group = { - .name = "blkt", - .attrs = (struct attribute **)qeth_blkt_device_attrs, -}; - -static struct device_attribute * qeth_device_attrs[] = { - &dev_attr_state, - &dev_attr_chpid, - &dev_attr_if_name, - &dev_attr_card_type, - &dev_attr_portno, - &dev_attr_portname, - &dev_attr_checksumming, - &dev_attr_priority_queueing, - &dev_attr_buffer_count, - &dev_attr_route4, -#ifdef CONFIG_QETH_IPV6 - &dev_attr_route6, -#endif - &dev_attr_add_hhlen, - &dev_attr_fake_ll, - &dev_attr_fake_broadcast, - &dev_attr_recover, - &dev_attr_broadcast_mode, - &dev_attr_canonical_macaddr, - &dev_attr_layer2, - &dev_attr_large_send, - &dev_attr_performance_stats, - NULL, -}; - -static struct attribute_group qeth_device_attr_group = { - .attrs = (struct attribute **)qeth_device_attrs, -}; - -static struct device_attribute * qeth_osn_device_attrs[] = { - &dev_attr_state, - &dev_attr_chpid, - &dev_attr_if_name, - &dev_attr_card_type, - &dev_attr_buffer_count, - &dev_attr_recover, - NULL, -}; - -static struct attribute_group qeth_osn_device_attr_group = { - .attrs = (struct attribute **)qeth_osn_device_attrs, -}; - -#define QETH_DEVICE_ATTR(_id,_name,_mode,_show,_store) \ -struct device_attribute dev_attr_##_id = { \ - .attr = {.name=__stringify(_name), .mode=_mode, },\ - .show = _show, \ - .store = _store, \ -}; - -static int -qeth_check_layer2(struct qeth_card *card) -{ - if (card->options.layer2) - return -EPERM; - return 0; -} - - -static ssize_t -qeth_dev_ipato_enable_show(struct device *dev, struct device_attribute *attr, char *buf) -{ - struct qeth_card *card = dev->driver_data; - - if (!card) - return -EINVAL; - - if (qeth_check_layer2(card)) - return -EPERM; - return sprintf(buf, "%i\n", card->ipato.enabled? 1:0); -} - -static ssize_t -qeth_dev_ipato_enable_store(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) -{ - struct qeth_card *card = dev->driver_data; - char *tmp; - - if (!card) - return -EINVAL; - - if ((card->state != CARD_STATE_DOWN) && - (card->state != CARD_STATE_RECOVER)) - return -EPERM; - - if (qeth_check_layer2(card)) - return -EPERM; - - tmp = strsep((char **) &buf, "\n"); - if (!strcmp(tmp, "toggle")){ - card->ipato.enabled = (card->ipato.enabled)? 
0 : 1; - } else if (!strcmp(tmp, "1")){ - card->ipato.enabled = 1; - } else if (!strcmp(tmp, "0")){ - card->ipato.enabled = 0; - } else { - PRINT_WARN("ipato_enable: write 0, 1 or 'toggle' to " - "this file\n"); - return -EINVAL; - } - return count; -} - -static QETH_DEVICE_ATTR(ipato_enable, enable, 0644, - qeth_dev_ipato_enable_show, - qeth_dev_ipato_enable_store); - -static ssize_t -qeth_dev_ipato_invert4_show(struct device *dev, struct device_attribute *attr, char *buf) -{ - struct qeth_card *card = dev->driver_data; - - if (!card) - return -EINVAL; - - if (qeth_check_layer2(card)) - return -EPERM; - - return sprintf(buf, "%i\n", card->ipato.invert4? 1:0); -} - -static ssize_t -qeth_dev_ipato_invert4_store(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) -{ - struct qeth_card *card = dev->driver_data; - char *tmp; - - if (!card) - return -EINVAL; - - if (qeth_check_layer2(card)) - return -EPERM; - - tmp = strsep((char **) &buf, "\n"); - if (!strcmp(tmp, "toggle")){ - card->ipato.invert4 = (card->ipato.invert4)? 0 : 1; - } else if (!strcmp(tmp, "1")){ - card->ipato.invert4 = 1; - } else if (!strcmp(tmp, "0")){ - card->ipato.invert4 = 0; - } else { - PRINT_WARN("ipato_invert4: write 0, 1 or 'toggle' to " - "this file\n"); - return -EINVAL; - } - return count; -} - -static QETH_DEVICE_ATTR(ipato_invert4, invert4, 0644, - qeth_dev_ipato_invert4_show, - qeth_dev_ipato_invert4_store); - -static ssize_t -qeth_dev_ipato_add_show(char *buf, struct qeth_card *card, - enum qeth_prot_versions proto) -{ - struct qeth_ipato_entry *ipatoe; - unsigned long flags; - char addr_str[40]; - int entry_len; /* length of 1 entry string, differs between v4 and v6 */ - int i = 0; - - if (qeth_check_layer2(card)) - return -EPERM; - - entry_len = (proto == QETH_PROT_IPV4)? 12 : 40; - /* add strlen for "/\n" */ - entry_len += (proto == QETH_PROT_IPV4)? 5 : 6; - spin_lock_irqsave(&card->ip_lock, flags); - list_for_each_entry(ipatoe, &card->ipato.entries, entry){ - if (ipatoe->proto != proto) - continue; - /* String must not be longer than PAGE_SIZE. So we check if - * string length gets near PAGE_SIZE. Then we can savely display - * the next IPv6 address (worst case, compared to IPv4) */ - if ((PAGE_SIZE - i) <= entry_len) - break; - qeth_ipaddr_to_string(proto, ipatoe->addr, addr_str); - i += snprintf(buf + i, PAGE_SIZE - i, - "%s/%i\n", addr_str, ipatoe->mask_bits); - } - spin_unlock_irqrestore(&card->ip_lock, flags); - i += snprintf(buf + i, PAGE_SIZE - i, "\n"); - - return i; -} - -static ssize_t -qeth_dev_ipato_add4_show(struct device *dev, struct device_attribute *attr, char *buf) -{ - struct qeth_card *card = dev->driver_data; - - if (!card) - return -EINVAL; - - return qeth_dev_ipato_add_show(buf, card, QETH_PROT_IPV4); -} - -static int -qeth_parse_ipatoe(const char* buf, enum qeth_prot_versions proto, - u8 *addr, int *mask_bits) -{ - const char *start, *end; - char *tmp; - char buffer[40] = {0, }; - - start = buf; - /* get address string */ - end = strchr(start, '/'); - if (!end || (end - start >= 40)){ - PRINT_WARN("Invalid format for ipato_addx/delx. " - "Use /\n"); - return -EINVAL; - } - strncpy(buffer, start, end - start); - if (qeth_string_to_ipaddr(buffer, proto, addr)){ - PRINT_WARN("Invalid IP address format!\n"); - return -EINVAL; - } - start = end + 1; - *mask_bits = simple_strtoul(start, &tmp, 10); - if (!strlen(start) || - (tmp == start) || - (*mask_bits > ((proto == QETH_PROT_IPV4) ? 
32 : 128))) { - PRINT_WARN("Invalid mask bits for ipato_addx/delx !\n"); - return -EINVAL; - } - return 0; -} - -static ssize_t -qeth_dev_ipato_add_store(const char *buf, size_t count, - struct qeth_card *card, enum qeth_prot_versions proto) -{ - struct qeth_ipato_entry *ipatoe; - u8 addr[16]; - int mask_bits; - int rc; - - if (qeth_check_layer2(card)) - return -EPERM; - if ((rc = qeth_parse_ipatoe(buf, proto, addr, &mask_bits))) - return rc; - - if (!(ipatoe = kzalloc(sizeof(struct qeth_ipato_entry), GFP_KERNEL))){ - PRINT_WARN("No memory to allocate ipato entry\n"); - return -ENOMEM; - } - ipatoe->proto = proto; - memcpy(ipatoe->addr, addr, (proto == QETH_PROT_IPV4)? 4:16); - ipatoe->mask_bits = mask_bits; - - if ((rc = qeth_add_ipato_entry(card, ipatoe))){ - kfree(ipatoe); - return rc; - } - - return count; -} - -static ssize_t -qeth_dev_ipato_add4_store(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) -{ - struct qeth_card *card = dev->driver_data; - - if (!card) - return -EINVAL; - - return qeth_dev_ipato_add_store(buf, count, card, QETH_PROT_IPV4); -} - -static QETH_DEVICE_ATTR(ipato_add4, add4, 0644, - qeth_dev_ipato_add4_show, - qeth_dev_ipato_add4_store); - -static ssize_t -qeth_dev_ipato_del_store(const char *buf, size_t count, - struct qeth_card *card, enum qeth_prot_versions proto) -{ - u8 addr[16]; - int mask_bits; - int rc; - - if (qeth_check_layer2(card)) - return -EPERM; - if ((rc = qeth_parse_ipatoe(buf, proto, addr, &mask_bits))) - return rc; - - qeth_del_ipato_entry(card, proto, addr, mask_bits); - - return count; -} - -static ssize_t -qeth_dev_ipato_del4_store(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) -{ - struct qeth_card *card = dev->driver_data; - - if (!card) - return -EINVAL; - - return qeth_dev_ipato_del_store(buf, count, card, QETH_PROT_IPV4); -} - -static QETH_DEVICE_ATTR(ipato_del4, del4, 0200, NULL, - qeth_dev_ipato_del4_store); - -#ifdef CONFIG_QETH_IPV6 -static ssize_t -qeth_dev_ipato_invert6_show(struct device *dev, struct device_attribute *attr, char *buf) -{ - struct qeth_card *card = dev->driver_data; - - if (!card) - return -EINVAL; - - if (qeth_check_layer2(card)) - return -EPERM; - - return sprintf(buf, "%i\n", card->ipato.invert6? 1:0); -} - -static ssize_t -qeth_dev_ipato_invert6_store(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) -{ - struct qeth_card *card = dev->driver_data; - char *tmp; - - if (!card) - return -EINVAL; - - if (qeth_check_layer2(card)) - return -EPERM; - - tmp = strsep((char **) &buf, "\n"); - if (!strcmp(tmp, "toggle")){ - card->ipato.invert6 = (card->ipato.invert6)? 
0 : 1; - } else if (!strcmp(tmp, "1")){ - card->ipato.invert6 = 1; - } else if (!strcmp(tmp, "0")){ - card->ipato.invert6 = 0; - } else { - PRINT_WARN("ipato_invert6: write 0, 1 or 'toggle' to " - "this file\n"); - return -EINVAL; - } - return count; -} - -static QETH_DEVICE_ATTR(ipato_invert6, invert6, 0644, - qeth_dev_ipato_invert6_show, - qeth_dev_ipato_invert6_store); - - -static ssize_t -qeth_dev_ipato_add6_show(struct device *dev, struct device_attribute *attr, char *buf) -{ - struct qeth_card *card = dev->driver_data; - - if (!card) - return -EINVAL; - - return qeth_dev_ipato_add_show(buf, card, QETH_PROT_IPV6); -} - -static ssize_t -qeth_dev_ipato_add6_store(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) -{ - struct qeth_card *card = dev->driver_data; - - if (!card) - return -EINVAL; - - return qeth_dev_ipato_add_store(buf, count, card, QETH_PROT_IPV6); -} - -static QETH_DEVICE_ATTR(ipato_add6, add6, 0644, - qeth_dev_ipato_add6_show, - qeth_dev_ipato_add6_store); - -static ssize_t -qeth_dev_ipato_del6_store(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) -{ - struct qeth_card *card = dev->driver_data; - - if (!card) - return -EINVAL; - - return qeth_dev_ipato_del_store(buf, count, card, QETH_PROT_IPV6); -} - -static QETH_DEVICE_ATTR(ipato_del6, del6, 0200, NULL, - qeth_dev_ipato_del6_store); -#endif /* CONFIG_QETH_IPV6 */ - -static struct device_attribute * qeth_ipato_device_attrs[] = { - &dev_attr_ipato_enable, - &dev_attr_ipato_invert4, - &dev_attr_ipato_add4, - &dev_attr_ipato_del4, -#ifdef CONFIG_QETH_IPV6 - &dev_attr_ipato_invert6, - &dev_attr_ipato_add6, - &dev_attr_ipato_del6, -#endif - NULL, -}; - -static struct attribute_group qeth_device_ipato_group = { - .name = "ipa_takeover", - .attrs = (struct attribute **)qeth_ipato_device_attrs, -}; - -static ssize_t -qeth_dev_vipa_add_show(char *buf, struct qeth_card *card, - enum qeth_prot_versions proto) -{ - struct qeth_ipaddr *ipaddr; - char addr_str[40]; - int entry_len; /* length of 1 entry string, differs between v4 and v6 */ - unsigned long flags; - int i = 0; - - if (qeth_check_layer2(card)) - return -EPERM; - - entry_len = (proto == QETH_PROT_IPV4)? 12 : 40; - entry_len += 2; /* \n + terminator */ - spin_lock_irqsave(&card->ip_lock, flags); - list_for_each_entry(ipaddr, &card->ip_list, entry){ - if (ipaddr->proto != proto) - continue; - if (ipaddr->type != QETH_IP_TYPE_VIPA) - continue; - /* String must not be longer than PAGE_SIZE. So we check if - * string length gets near PAGE_SIZE. 
Then we can savely display - * the next IPv6 address (worst case, compared to IPv4) */ - if ((PAGE_SIZE - i) <= entry_len) - break; - qeth_ipaddr_to_string(proto, (const u8 *)&ipaddr->u, addr_str); - i += snprintf(buf + i, PAGE_SIZE - i, "%s\n", addr_str); - } - spin_unlock_irqrestore(&card->ip_lock, flags); - i += snprintf(buf + i, PAGE_SIZE - i, "\n"); - - return i; -} - -static ssize_t -qeth_dev_vipa_add4_show(struct device *dev, struct device_attribute *attr, char *buf) -{ - struct qeth_card *card = dev->driver_data; - - if (!card) - return -EINVAL; - - return qeth_dev_vipa_add_show(buf, card, QETH_PROT_IPV4); -} - -static int -qeth_parse_vipae(const char* buf, enum qeth_prot_versions proto, - u8 *addr) -{ - if (qeth_string_to_ipaddr(buf, proto, addr)){ - PRINT_WARN("Invalid IP address format!\n"); - return -EINVAL; - } - return 0; -} - -static ssize_t -qeth_dev_vipa_add_store(const char *buf, size_t count, - struct qeth_card *card, enum qeth_prot_versions proto) -{ - u8 addr[16] = {0, }; - int rc; - - if (qeth_check_layer2(card)) - return -EPERM; - if ((rc = qeth_parse_vipae(buf, proto, addr))) - return rc; - - if ((rc = qeth_add_vipa(card, proto, addr))) - return rc; - - return count; -} - -static ssize_t -qeth_dev_vipa_add4_store(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) -{ - struct qeth_card *card = dev->driver_data; - - if (!card) - return -EINVAL; - - return qeth_dev_vipa_add_store(buf, count, card, QETH_PROT_IPV4); -} - -static QETH_DEVICE_ATTR(vipa_add4, add4, 0644, - qeth_dev_vipa_add4_show, - qeth_dev_vipa_add4_store); - -static ssize_t -qeth_dev_vipa_del_store(const char *buf, size_t count, - struct qeth_card *card, enum qeth_prot_versions proto) -{ - u8 addr[16]; - int rc; - - if (qeth_check_layer2(card)) - return -EPERM; - if ((rc = qeth_parse_vipae(buf, proto, addr))) - return rc; - - qeth_del_vipa(card, proto, addr); - - return count; -} - -static ssize_t -qeth_dev_vipa_del4_store(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) -{ - struct qeth_card *card = dev->driver_data; - - if (!card) - return -EINVAL; - - return qeth_dev_vipa_del_store(buf, count, card, QETH_PROT_IPV4); -} - -static QETH_DEVICE_ATTR(vipa_del4, del4, 0200, NULL, - qeth_dev_vipa_del4_store); - -#ifdef CONFIG_QETH_IPV6 -static ssize_t -qeth_dev_vipa_add6_show(struct device *dev, struct device_attribute *attr, char *buf) -{ - struct qeth_card *card = dev->driver_data; - - if (!card) - return -EINVAL; - - return qeth_dev_vipa_add_show(buf, card, QETH_PROT_IPV6); -} - -static ssize_t -qeth_dev_vipa_add6_store(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) -{ - struct qeth_card *card = dev->driver_data; - - if (!card) - return -EINVAL; - - return qeth_dev_vipa_add_store(buf, count, card, QETH_PROT_IPV6); -} - -static QETH_DEVICE_ATTR(vipa_add6, add6, 0644, - qeth_dev_vipa_add6_show, - qeth_dev_vipa_add6_store); - -static ssize_t -qeth_dev_vipa_del6_store(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) -{ - struct qeth_card *card = dev->driver_data; - - if (!card) - return -EINVAL; - - if (qeth_check_layer2(card)) - return -EPERM; - - return qeth_dev_vipa_del_store(buf, count, card, QETH_PROT_IPV6); -} - -static QETH_DEVICE_ATTR(vipa_del6, del6, 0200, NULL, - qeth_dev_vipa_del6_store); -#endif /* CONFIG_QETH_IPV6 */ - -static struct device_attribute * qeth_vipa_device_attrs[] = { - &dev_attr_vipa_add4, - &dev_attr_vipa_del4, -#ifdef CONFIG_QETH_IPV6 - 
&dev_attr_vipa_add6, - &dev_attr_vipa_del6, -#endif - NULL, -}; - -static struct attribute_group qeth_device_vipa_group = { - .name = "vipa", - .attrs = (struct attribute **)qeth_vipa_device_attrs, -}; - -static ssize_t -qeth_dev_rxip_add_show(char *buf, struct qeth_card *card, - enum qeth_prot_versions proto) -{ - struct qeth_ipaddr *ipaddr; - char addr_str[40]; - int entry_len; /* length of 1 entry string, differs between v4 and v6 */ - unsigned long flags; - int i = 0; - - if (qeth_check_layer2(card)) - return -EPERM; - - entry_len = (proto == QETH_PROT_IPV4)? 12 : 40; - entry_len += 2; /* \n + terminator */ - spin_lock_irqsave(&card->ip_lock, flags); - list_for_each_entry(ipaddr, &card->ip_list, entry){ - if (ipaddr->proto != proto) - continue; - if (ipaddr->type != QETH_IP_TYPE_RXIP) - continue; - /* String must not be longer than PAGE_SIZE. So we check if - * string length gets near PAGE_SIZE. Then we can savely display - * the next IPv6 address (worst case, compared to IPv4) */ - if ((PAGE_SIZE - i) <= entry_len) - break; - qeth_ipaddr_to_string(proto, (const u8 *)&ipaddr->u, addr_str); - i += snprintf(buf + i, PAGE_SIZE - i, "%s\n", addr_str); - } - spin_unlock_irqrestore(&card->ip_lock, flags); - i += snprintf(buf + i, PAGE_SIZE - i, "\n"); - - return i; -} - -static ssize_t -qeth_dev_rxip_add4_show(struct device *dev, struct device_attribute *attr, char *buf) -{ - struct qeth_card *card = dev->driver_data; - - if (!card) - return -EINVAL; - - return qeth_dev_rxip_add_show(buf, card, QETH_PROT_IPV4); -} - -static int -qeth_parse_rxipe(const char* buf, enum qeth_prot_versions proto, - u8 *addr) -{ - if (qeth_string_to_ipaddr(buf, proto, addr)){ - PRINT_WARN("Invalid IP address format!\n"); - return -EINVAL; - } - return 0; -} - -static ssize_t -qeth_dev_rxip_add_store(const char *buf, size_t count, - struct qeth_card *card, enum qeth_prot_versions proto) -{ - u8 addr[16] = {0, }; - int rc; - - if (qeth_check_layer2(card)) - return -EPERM; - if ((rc = qeth_parse_rxipe(buf, proto, addr))) - return rc; - - if ((rc = qeth_add_rxip(card, proto, addr))) - return rc; - - return count; -} - -static ssize_t -qeth_dev_rxip_add4_store(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) -{ - struct qeth_card *card = dev->driver_data; - - if (!card) - return -EINVAL; - - return qeth_dev_rxip_add_store(buf, count, card, QETH_PROT_IPV4); -} - -static QETH_DEVICE_ATTR(rxip_add4, add4, 0644, - qeth_dev_rxip_add4_show, - qeth_dev_rxip_add4_store); - -static ssize_t -qeth_dev_rxip_del_store(const char *buf, size_t count, - struct qeth_card *card, enum qeth_prot_versions proto) -{ - u8 addr[16]; - int rc; - - if (qeth_check_layer2(card)) - return -EPERM; - if ((rc = qeth_parse_rxipe(buf, proto, addr))) - return rc; - - qeth_del_rxip(card, proto, addr); - - return count; -} - -static ssize_t -qeth_dev_rxip_del4_store(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) -{ - struct qeth_card *card = dev->driver_data; - - if (!card) - return -EINVAL; - - return qeth_dev_rxip_del_store(buf, count, card, QETH_PROT_IPV4); -} - -static QETH_DEVICE_ATTR(rxip_del4, del4, 0200, NULL, - qeth_dev_rxip_del4_store); - -#ifdef CONFIG_QETH_IPV6 -static ssize_t -qeth_dev_rxip_add6_show(struct device *dev, struct device_attribute *attr, char *buf) -{ - struct qeth_card *card = dev->driver_data; - - if (!card) - return -EINVAL; - - return qeth_dev_rxip_add_show(buf, card, QETH_PROT_IPV6); -} - -static ssize_t -qeth_dev_rxip_add6_store(struct device *dev, struct 
device_attribute *attr, const char *buf, size_t count) -{ - struct qeth_card *card = dev->driver_data; - - if (!card) - return -EINVAL; - - return qeth_dev_rxip_add_store(buf, count, card, QETH_PROT_IPV6); -} - -static QETH_DEVICE_ATTR(rxip_add6, add6, 0644, - qeth_dev_rxip_add6_show, - qeth_dev_rxip_add6_store); - -static ssize_t -qeth_dev_rxip_del6_store(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) -{ - struct qeth_card *card = dev->driver_data; - - if (!card) - return -EINVAL; - - return qeth_dev_rxip_del_store(buf, count, card, QETH_PROT_IPV6); -} - -static QETH_DEVICE_ATTR(rxip_del6, del6, 0200, NULL, - qeth_dev_rxip_del6_store); -#endif /* CONFIG_QETH_IPV6 */ - -static struct device_attribute * qeth_rxip_device_attrs[] = { - &dev_attr_rxip_add4, - &dev_attr_rxip_del4, -#ifdef CONFIG_QETH_IPV6 - &dev_attr_rxip_add6, - &dev_attr_rxip_del6, -#endif - NULL, -}; - -static struct attribute_group qeth_device_rxip_group = { - .name = "rxip", - .attrs = (struct attribute **)qeth_rxip_device_attrs, -}; - -int -qeth_create_device_attributes(struct device *dev) -{ - int ret; - struct qeth_card *card = dev->driver_data; - - if (card->info.type == QETH_CARD_TYPE_OSN) - return sysfs_create_group(&dev->kobj, - &qeth_osn_device_attr_group); - - if ((ret = sysfs_create_group(&dev->kobj, &qeth_device_attr_group))) - return ret; - if ((ret = sysfs_create_group(&dev->kobj, &qeth_device_ipato_group))){ - sysfs_remove_group(&dev->kobj, &qeth_device_attr_group); - return ret; - } - if ((ret = sysfs_create_group(&dev->kobj, &qeth_device_vipa_group))){ - sysfs_remove_group(&dev->kobj, &qeth_device_attr_group); - sysfs_remove_group(&dev->kobj, &qeth_device_ipato_group); - return ret; - } - if ((ret = sysfs_create_group(&dev->kobj, &qeth_device_rxip_group))){ - sysfs_remove_group(&dev->kobj, &qeth_device_attr_group); - sysfs_remove_group(&dev->kobj, &qeth_device_ipato_group); - sysfs_remove_group(&dev->kobj, &qeth_device_vipa_group); - return ret; - } - if ((ret = sysfs_create_group(&dev->kobj, &qeth_device_blkt_group))){ - sysfs_remove_group(&dev->kobj, &qeth_device_attr_group); - sysfs_remove_group(&dev->kobj, &qeth_device_ipato_group); - sysfs_remove_group(&dev->kobj, &qeth_device_vipa_group); - sysfs_remove_group(&dev->kobj, &qeth_device_rxip_group); - return ret; - } - return 0; -} - -void -qeth_remove_device_attributes(struct device *dev) -{ - struct qeth_card *card = dev->driver_data; - - if (card->info.type == QETH_CARD_TYPE_OSN) { - sysfs_remove_group(&dev->kobj, &qeth_osn_device_attr_group); - return; - } - sysfs_remove_group(&dev->kobj, &qeth_device_attr_group); - sysfs_remove_group(&dev->kobj, &qeth_device_ipato_group); - sysfs_remove_group(&dev->kobj, &qeth_device_vipa_group); - sysfs_remove_group(&dev->kobj, &qeth_device_rxip_group); - sysfs_remove_group(&dev->kobj, &qeth_device_blkt_group); -} - -/**********************/ -/* DRIVER ATTRIBUTES */ -/**********************/ -static ssize_t -qeth_driver_group_store(struct device_driver *ddrv, const char *buf, - size_t count) -{ - const char *start, *end; - char bus_ids[3][BUS_ID_SIZE], *argv[3]; - int i; - int err; - - start = buf; - for (i = 0; i < 3; i++) { - static const char delim[] = { ',', ',', '\n' }; - int len; - - if (!(end = strchr(start, delim[i]))) - return -EINVAL; - len = min_t(ptrdiff_t, BUS_ID_SIZE, end - start); - strncpy(bus_ids[i], start, len); - bus_ids[i][len] = '\0'; - start = end + 1; - argv[i] = bus_ids[i]; - } - err = ccwgroup_create(qeth_root_dev, qeth_ccwgroup_driver.driver_id, - 
&qeth_ccw_driver, 3, argv); - if (err) - return err; - else - return count; -} - - -static DRIVER_ATTR(group, 0200, NULL, qeth_driver_group_store); - -static ssize_t -qeth_driver_notifier_register_store(struct device_driver *ddrv, const char *buf, - size_t count) -{ - int rc; - int signum; - char *tmp, *tmp2; - - tmp = strsep((char **) &buf, "\n"); - if (!strncmp(tmp, "unregister", 10)){ - if ((rc = qeth_notifier_unregister(current))) - return rc; - return count; - } - - signum = simple_strtoul(tmp, &tmp2, 10); - if ((signum < 0) || (signum > 32)){ - PRINT_WARN("Signal number %d is out of range\n", signum); - return -EINVAL; - } - if ((rc = qeth_notifier_register(current, signum))) - return rc; - - return count; -} - -static DRIVER_ATTR(notifier_register, 0200, NULL, - qeth_driver_notifier_register_store); - -int -qeth_create_driver_attributes(void) -{ - int rc; - - if ((rc = driver_create_file(&qeth_ccwgroup_driver.driver, - &driver_attr_group))) - return rc; - return driver_create_file(&qeth_ccwgroup_driver.driver, - &driver_attr_notifier_register); -} - -void -qeth_remove_driver_attributes(void) -{ - driver_remove_file(&qeth_ccwgroup_driver.driver, - &driver_attr_group); - driver_remove_file(&qeth_ccwgroup_driver.driver, - &driver_attr_notifier_register); -} diff --git a/drivers/s390/net/qeth_tso.h b/drivers/s390/net/qeth_tso.h deleted file mode 100644 index c20e923cf9ad..000000000000 --- a/drivers/s390/net/qeth_tso.h +++ /dev/null @@ -1,148 +0,0 @@ -/* - * linux/drivers/s390/net/qeth_tso.h - * - * Header file for qeth TCP Segmentation Offload support. - * - * Copyright 2004 IBM Corporation - * - * Author(s): Frank Pavlic - * - */ -#ifndef __QETH_TSO_H__ -#define __QETH_TSO_H__ - -#include -#include -#include -#include -#include -#include "qeth.h" -#include "qeth_mpc.h" - - -static inline struct qeth_hdr_tso * -qeth_tso_prepare_skb(struct qeth_card *card, struct sk_buff **skb) -{ - QETH_DBF_TEXT(trace, 5, "tsoprsk"); - return qeth_push_skb(card, *skb, sizeof(struct qeth_hdr_tso)); -} - -/** - * fill header for a TSO packet - */ -static inline void -qeth_tso_fill_header(struct qeth_card *card, struct sk_buff *skb) -{ - struct qeth_hdr_tso *hdr; - struct tcphdr *tcph; - struct iphdr *iph; - - QETH_DBF_TEXT(trace, 5, "tsofhdr"); - - hdr = (struct qeth_hdr_tso *) skb->data; - iph = ip_hdr(skb); - tcph = tcp_hdr(skb); - /*fix header to TSO values ...*/ - hdr->hdr.hdr.l3.id = QETH_HEADER_TYPE_TSO; - /*set values which are fix for the first approach ...*/ - hdr->ext.hdr_tot_len = (__u16) sizeof(struct qeth_hdr_ext_tso); - hdr->ext.imb_hdr_no = 1; - hdr->ext.hdr_type = 1; - hdr->ext.hdr_version = 1; - hdr->ext.hdr_len = 28; - /*insert non-fix values */ - hdr->ext.mss = skb_shinfo(skb)->gso_size; - hdr->ext.dg_hdr_len = (__u16)(iph->ihl*4 + tcph->doff*4); - hdr->ext.payload_len = (__u16)(skb->len - hdr->ext.dg_hdr_len - - sizeof(struct qeth_hdr_tso)); -} - -/** - * change some header values as requested by hardware - */ -static inline void -qeth_tso_set_tcpip_header(struct qeth_card *card, struct sk_buff *skb) -{ - struct iphdr *iph = ip_hdr(skb); - struct ipv6hdr *ip6h = ipv6_hdr(skb); - struct tcphdr *tcph = tcp_hdr(skb); - - tcph->check = 0; - if (skb->protocol == ETH_P_IPV6) { - ip6h->payload_len = 0; - tcph->check = ~csum_ipv6_magic(&ip6h->saddr, &ip6h->daddr, - 0, IPPROTO_TCP, 0); - return; - } - /*OSA want us to set these values ...*/ - tcph->check = ~csum_tcpudp_magic(iph->saddr, iph->daddr, - 0, IPPROTO_TCP, 0); - iph->tot_len = 0; - iph->check = 0; -} - -static inline int 
-qeth_tso_prepare_packet(struct qeth_card *card, struct sk_buff *skb, - int ipv, int cast_type) -{ - struct qeth_hdr_tso *hdr; - - QETH_DBF_TEXT(trace, 5, "tsoprep"); - - hdr = (struct qeth_hdr_tso *) qeth_tso_prepare_skb(card, &skb); - if (hdr == NULL) { - QETH_DBF_TEXT(trace, 4, "tsoperr"); - return -ENOMEM; - } - memset(hdr, 0, sizeof(struct qeth_hdr_tso)); - /*fill first 32 bytes of qdio header as used - *FIXME: TSO has two struct members - * with different names but same size - * */ - qeth_fill_header(card, &hdr->hdr, skb, ipv, cast_type); - qeth_tso_fill_header(card, skb); - qeth_tso_set_tcpip_header(card, skb); - return 0; -} - -static inline void -__qeth_fill_buffer_frag(struct sk_buff *skb, struct qdio_buffer *buffer, - int is_tso, int *next_element_to_fill) -{ - struct skb_frag_struct *frag; - int fragno; - unsigned long addr; - int element, cnt, dlen; - - fragno = skb_shinfo(skb)->nr_frags; - element = *next_element_to_fill; - dlen = 0; - - if (is_tso) - buffer->element[element].flags = - SBAL_FLAGS_MIDDLE_FRAG; - else - buffer->element[element].flags = - SBAL_FLAGS_FIRST_FRAG; - if ( (dlen = (skb->len - skb->data_len)) ) { - buffer->element[element].addr = skb->data; - buffer->element[element].length = dlen; - element++; - } - for (cnt = 0; cnt < fragno; cnt++) { - frag = &skb_shinfo(skb)->frags[cnt]; - addr = (page_to_pfn(frag->page) << PAGE_SHIFT) + - frag->page_offset; - buffer->element[element].addr = (char *)addr; - buffer->element[element].length = frag->size; - if (cnt < (fragno - 1)) - buffer->element[element].flags = - SBAL_FLAGS_MIDDLE_FRAG; - else - buffer->element[element].flags = - SBAL_FLAGS_LAST_FRAG; - element++; - } - *next_element_to_fill = element; -} -#endif /* __QETH_TSO_H__ */ -- cgit v1.2.3-59-g8ed1b From c346dca10840a874240c78efe3f39acf4312a1f2 Mon Sep 17 00:00:00 2001 From: YOSHIFUJI Hideaki Date: Tue, 25 Mar 2008 21:47:49 +0900 Subject: [NET] NETNS: Omit net_device->nd_net without CONFIG_NET_NS. Introduce per-net_device inlines: dev_net(), dev_net_set(). Without CONFIG_NET_NS, no namespace other than &init_net exists. Let's explicitly define them to help compiler optimizations. 
Signed-off-by: YOSHIFUJI Hideaki --- arch/ia64/hp/sim/simeth.c | 2 +- drivers/block/aoe/aoenet.c | 2 +- drivers/net/bonding/bond_3ad.c | 2 +- drivers/net/bonding/bond_alb.c | 2 +- drivers/net/bonding/bond_main.c | 6 ++--- drivers/net/hamradio/bpqether.c | 4 ++-- drivers/net/loopback.c | 2 +- drivers/net/macvlan.c | 2 +- drivers/net/pppoe.c | 6 ++--- drivers/net/veth.c | 2 +- drivers/net/via-velocity.c | 2 +- drivers/net/wan/dlci.c | 2 +- drivers/net/wan/hdlc.c | 4 ++-- drivers/net/wan/lapbether.c | 4 ++-- drivers/net/wan/syncppp.c | 2 +- drivers/s390/net/qeth_l3_main.c | 2 +- include/linux/inetdevice.h | 6 ++--- include/linux/netdevice.h | 25 ++++++++++++++++++++- net/8021q/vlan.c | 2 +- net/8021q/vlan_dev.c | 2 +- net/appletalk/aarp.c | 4 ++-- net/appletalk/ddp.c | 6 ++--- net/atm/clip.c | 2 +- net/atm/mpc.c | 2 +- net/ax25/af_ax25.c | 2 +- net/ax25/ax25_in.c | 2 +- net/bridge/br_notify.c | 2 +- net/bridge/br_stp_bpdu.c | 2 +- net/can/af_can.c | 4 ++-- net/can/bcm.c | 2 +- net/can/raw.c | 2 +- net/core/dev.c | 22 +++++++++---------- net/core/dst.c | 2 +- net/core/fib_rules.c | 2 +- net/core/neighbour.c | 12 +++++----- net/core/pktgen.c | 2 +- net/core/rtnetlink.c | 4 ++-- net/decnet/af_decnet.c | 2 +- net/decnet/dn_route.c | 2 +- net/econet/af_econet.c | 4 ++-- net/ipv4/arp.c | 14 ++++++------ net/ipv4/devinet.c | 10 ++++----- net/ipv4/fib_frontend.c | 12 +++++----- net/ipv4/icmp.c | 8 +++---- net/ipv4/igmp.c | 16 +++++++------- net/ipv4/ip_fragment.c | 2 +- net/ipv4/ip_gre.c | 2 +- net/ipv4/ip_input.c | 6 ++--- net/ipv4/ip_options.c | 2 +- net/ipv4/ipconfig.c | 4 ++-- net/ipv4/ipmr.c | 2 +- net/ipv4/netfilter/ip_queue.c | 2 +- net/ipv4/netfilter/ipt_MASQUERADE.c | 2 +- net/ipv4/raw.c | 4 ++-- net/ipv4/route.c | 28 +++++++++++------------ net/ipv4/tcp_ipv4.c | 6 ++--- net/ipv4/udp.c | 4 ++-- net/ipv4/xfrm4_policy.c | 2 +- net/ipv6/addrconf.c | 44 ++++++++++++++++++------------------- net/ipv6/icmp.c | 4 ++-- net/ipv6/ip6_output.c | 2 +- net/ipv6/mcast.c | 6 ++--- net/ipv6/ndisc.c | 24 ++++++++++---------- net/ipv6/netfilter/ip6_queue.c | 2 +- net/ipv6/proc.c | 2 +- net/ipv6/raw.c | 4 ++-- net/ipv6/reassembly.c | 2 +- net/ipv6/route.c | 40 ++++++++++++++++----------------- net/ipv6/tcp_ipv6.c | 10 ++++----- net/ipv6/udp.c | 4 ++-- net/ipv6/xfrm6_policy.c | 2 +- net/ipx/af_ipx.c | 4 ++-- net/irda/irlap_frame.c | 2 +- net/llc/llc_input.c | 2 +- net/netfilter/core.c | 2 +- net/netfilter/nfnetlink_queue.c | 2 +- net/netlabel/netlabel_unlabeled.c | 2 +- net/netrom/af_netrom.c | 2 +- net/packet/af_packet.c | 8 +++---- net/rose/af_rose.c | 2 +- net/sctp/protocol.c | 2 +- net/tipc/eth_media.c | 4 ++-- net/wireless/wext.c | 2 +- net/x25/af_x25.c | 2 +- net/x25/x25_dev.c | 2 +- net/xfrm/xfrm_policy.c | 4 ++-- security/selinux/netif.c | 2 +- 87 files changed, 251 insertions(+), 228 deletions(-) (limited to 'drivers/s390') diff --git a/arch/ia64/hp/sim/simeth.c b/arch/ia64/hp/sim/simeth.c index 969fe9f443c4..3d47839a0c48 100644 --- a/arch/ia64/hp/sim/simeth.c +++ b/arch/ia64/hp/sim/simeth.c @@ -294,7 +294,7 @@ simeth_device_event(struct notifier_block *this,unsigned long event, void *ptr) return NOTIFY_DONE; } - if (dev->nd_net != &init_net) + if (dev_net(dev) != &init_net) return NOTIFY_DONE; if ( event != NETDEV_UP && event != NETDEV_DOWN ) return NOTIFY_DONE; diff --git a/drivers/block/aoe/aoenet.c b/drivers/block/aoe/aoenet.c index 8460ef736d56..18d243c73eee 100644 --- a/drivers/block/aoe/aoenet.c +++ b/drivers/block/aoe/aoenet.c @@ -115,7 +115,7 @@ aoenet_rcv(struct sk_buff *skb, struct 
net_device *ifp, struct packet_type *pt, struct aoe_hdr *h; u32 n; - if (ifp->nd_net != &init_net) + if (dev_net(ifp) != &init_net) goto exit; skb = skb_share_check(skb, GFP_ATOMIC); diff --git a/drivers/net/bonding/bond_3ad.c b/drivers/net/bonding/bond_3ad.c index cb3c6faa7888..457d81f73e39 100644 --- a/drivers/net/bonding/bond_3ad.c +++ b/drivers/net/bonding/bond_3ad.c @@ -2429,7 +2429,7 @@ int bond_3ad_lacpdu_recv(struct sk_buff *skb, struct net_device *dev, struct pac struct slave *slave = NULL; int ret = NET_RX_DROP; - if (dev->nd_net != &init_net) + if (dev_net(dev) != &init_net) goto out; if (!(dev->flags & IFF_MASTER)) diff --git a/drivers/net/bonding/bond_alb.c b/drivers/net/bonding/bond_alb.c index b57bc9467dbe..b986dacf5d33 100644 --- a/drivers/net/bonding/bond_alb.c +++ b/drivers/net/bonding/bond_alb.c @@ -345,7 +345,7 @@ static int rlb_arp_recv(struct sk_buff *skb, struct net_device *bond_dev, struct struct arp_pkt *arp = (struct arp_pkt *)skb->data; int res = NET_RX_DROP; - if (bond_dev->nd_net != &init_net) + if (dev_net(bond_dev) != &init_net) goto out; if (!(bond_dev->flags & IFF_MASTER)) diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c index 5fc9d8d58ece..ac688fcb27d7 100644 --- a/drivers/net/bonding/bond_main.c +++ b/drivers/net/bonding/bond_main.c @@ -2629,7 +2629,7 @@ static int bond_arp_rcv(struct sk_buff *skb, struct net_device *dev, struct pack unsigned char *arp_ptr; __be32 sip, tip; - if (dev->nd_net != &init_net) + if (dev_net(dev) != &init_net) goto out; if (!(dev->priv_flags & IFF_BONDING) || !(dev->flags & IFF_MASTER)) @@ -3470,7 +3470,7 @@ static int bond_netdev_event(struct notifier_block *this, unsigned long event, v { struct net_device *event_dev = (struct net_device *)ptr; - if (event_dev->nd_net != &init_net) + if (dev_net(event_dev) != &init_net) return NOTIFY_DONE; dprintk("event_dev: %s, event: %lx\n", @@ -3508,7 +3508,7 @@ static int bond_inetaddr_event(struct notifier_block *this, unsigned long event, struct bonding *bond, *bond_next; struct vlan_entry *vlan, *vlan_next; - if (ifa->ifa_dev->dev->nd_net != &init_net) + if (dev_net(ifa->ifa_dev->dev) != &init_net) return NOTIFY_DONE; list_for_each_entry_safe(bond, bond_next, &bond_dev_list, bond_list) { diff --git a/drivers/net/hamradio/bpqether.c b/drivers/net/hamradio/bpqether.c index 5ddf8b0c34f9..5f4b4c6c9f76 100644 --- a/drivers/net/hamradio/bpqether.c +++ b/drivers/net/hamradio/bpqether.c @@ -172,7 +172,7 @@ static int bpq_rcv(struct sk_buff *skb, struct net_device *dev, struct packet_ty struct ethhdr *eth; struct bpqdev *bpq; - if (dev->nd_net != &init_net) + if (dev_net(dev) != &init_net) goto drop; if ((skb = skb_share_check(skb, GFP_ATOMIC)) == NULL) @@ -553,7 +553,7 @@ static int bpq_device_event(struct notifier_block *this,unsigned long event, voi { struct net_device *dev = (struct net_device *)ptr; - if (dev->nd_net != &init_net) + if (dev_net(dev) != &init_net) return NOTIFY_DONE; if (!dev_is_ethdev(dev)) diff --git a/drivers/net/loopback.c b/drivers/net/loopback.c index f2a6e7132241..41b774baac4d 100644 --- a/drivers/net/loopback.c +++ b/drivers/net/loopback.c @@ -258,7 +258,7 @@ static __net_init int loopback_net_init(struct net *net) if (!dev) goto out; - dev->nd_net = net; + dev_net_set(dev, net); err = register_netdev(dev); if (err) goto out_free_netdev; diff --git a/drivers/net/macvlan.c b/drivers/net/macvlan.c index f651a816b280..2056cfc624dc 100644 --- a/drivers/net/macvlan.c +++ b/drivers/net/macvlan.c @@ -402,7 +402,7 @@ static int 
macvlan_newlink(struct net_device *dev, if (!tb[IFLA_LINK]) return -EINVAL; - lowerdev = __dev_get_by_index(dev->nd_net, nla_get_u32(tb[IFLA_LINK])); + lowerdev = __dev_get_by_index(dev_net(dev), nla_get_u32(tb[IFLA_LINK])); if (lowerdev == NULL) return -ENODEV; diff --git a/drivers/net/pppoe.c b/drivers/net/pppoe.c index ac0ac98b19cd..4fad4ddb3504 100644 --- a/drivers/net/pppoe.c +++ b/drivers/net/pppoe.c @@ -301,7 +301,7 @@ static int pppoe_device_event(struct notifier_block *this, { struct net_device *dev = (struct net_device *) ptr; - if (dev->nd_net != &init_net) + if (dev_net(dev) != &init_net) return NOTIFY_DONE; /* Only look at sockets that are using this specific device. */ @@ -392,7 +392,7 @@ static int pppoe_rcv(struct sk_buff *skb, if (!(skb = skb_share_check(skb, GFP_ATOMIC))) goto out; - if (dev->nd_net != &init_net) + if (dev_net(dev) != &init_net) goto drop; if (!pskb_may_pull(skb, sizeof(struct pppoe_hdr))) @@ -424,7 +424,7 @@ static int pppoe_disc_rcv(struct sk_buff *skb, struct pppoe_hdr *ph; struct pppox_sock *po; - if (dev->nd_net != &init_net) + if (dev_net(dev) != &init_net) goto abort; if (!pskb_may_pull(skb, sizeof(struct pppoe_hdr))) diff --git a/drivers/net/veth.c b/drivers/net/veth.c index e2ad98bee6e7..31cd817f33f9 100644 --- a/drivers/net/veth.c +++ b/drivers/net/veth.c @@ -375,7 +375,7 @@ static int veth_newlink(struct net_device *dev, else snprintf(ifname, IFNAMSIZ, DRV_NAME "%%d"); - peer = rtnl_create_link(dev->nd_net, ifname, &veth_link_ops, tbp); + peer = rtnl_create_link(dev_net(dev), ifname, &veth_link_ops, tbp); if (IS_ERR(peer)) return PTR_ERR(peer); diff --git a/drivers/net/via-velocity.c b/drivers/net/via-velocity.c index 1525e8a89844..ed1afaf683a4 100644 --- a/drivers/net/via-velocity.c +++ b/drivers/net/via-velocity.c @@ -3464,7 +3464,7 @@ static int velocity_netdev_event(struct notifier_block *nb, unsigned long notifi struct velocity_info *vptr; unsigned long flags; - if (dev->nd_net != &init_net) + if (dev_net(dev) != &init_net) return NOTIFY_DONE; spin_lock_irqsave(&velocity_dev_list_lock, flags); diff --git a/drivers/net/wan/dlci.c b/drivers/net/wan/dlci.c index 96b232446c0b..b14242768fad 100644 --- a/drivers/net/wan/dlci.c +++ b/drivers/net/wan/dlci.c @@ -517,7 +517,7 @@ static int dlci_dev_event(struct notifier_block *unused, { struct net_device *dev = (struct net_device *) ptr; - if (dev->nd_net != &init_net) + if (dev_net(dev) != &init_net) return NOTIFY_DONE; if (event == NETDEV_UNREGISTER) { diff --git a/drivers/net/wan/hdlc.c b/drivers/net/wan/hdlc.c index 39951d0c34d6..9a83c9d5b8cf 100644 --- a/drivers/net/wan/hdlc.c +++ b/drivers/net/wan/hdlc.c @@ -68,7 +68,7 @@ static int hdlc_rcv(struct sk_buff *skb, struct net_device *dev, { struct hdlc_device *hdlc = dev_to_hdlc(dev); - if (dev->nd_net != &init_net) { + if (dev_net(dev) != &init_net) { kfree_skb(skb); return 0; } @@ -105,7 +105,7 @@ static int hdlc_device_event(struct notifier_block *this, unsigned long event, unsigned long flags; int on; - if (dev->nd_net != &init_net) + if (dev_net(dev) != &init_net) return NOTIFY_DONE; if (dev->get_stats != hdlc_get_stats) diff --git a/drivers/net/wan/lapbether.c b/drivers/net/wan/lapbether.c index fb37b8095231..629c909e05f9 100644 --- a/drivers/net/wan/lapbether.c +++ b/drivers/net/wan/lapbether.c @@ -91,7 +91,7 @@ static int lapbeth_rcv(struct sk_buff *skb, struct net_device *dev, struct packe int len, err; struct lapbethdev *lapbeth; - if (dev->nd_net != &init_net) + if (dev_net(dev) != &init_net) goto drop; if ((skb = skb_share_check(skb, 
GFP_ATOMIC)) == NULL) @@ -393,7 +393,7 @@ static int lapbeth_device_event(struct notifier_block *this, struct lapbethdev *lapbeth; struct net_device *dev = ptr; - if (dev->nd_net != &init_net) + if (dev_net(dev) != &init_net) return NOTIFY_DONE; if (!dev_is_ethdev(dev)) diff --git a/drivers/net/wan/syncppp.c b/drivers/net/wan/syncppp.c index 61e24b7a45a3..29b4b94e4947 100644 --- a/drivers/net/wan/syncppp.c +++ b/drivers/net/wan/syncppp.c @@ -1444,7 +1444,7 @@ static void sppp_print_bytes (u_char *p, u16 len) static int sppp_rcv(struct sk_buff *skb, struct net_device *dev, struct packet_type *p, struct net_device *orig_dev) { - if (dev->nd_net != &init_net) { + if (dev_net(dev) != &init_net) { kfree_skb(skb); return 0; } diff --git a/drivers/s390/net/qeth_l3_main.c b/drivers/s390/net/qeth_l3_main.c index a856cb47fc78..21c439046b3c 100644 --- a/drivers/s390/net/qeth_l3_main.c +++ b/drivers/s390/net/qeth_l3_main.c @@ -3250,7 +3250,7 @@ static int qeth_l3_ip_event(struct notifier_block *this, struct qeth_ipaddr *addr; struct qeth_card *card; - if (dev->nd_net != &init_net) + if (dev_net(dev) != &init_net) return NOTIFY_DONE; QETH_DBF_TEXT(trace, 3, "ipevent"); diff --git a/include/linux/inetdevice.h b/include/linux/inetdevice.h index da05ab47ff2f..7009b0cdd06f 100644 --- a/include/linux/inetdevice.h +++ b/include/linux/inetdevice.h @@ -70,13 +70,13 @@ static inline void ipv4_devconf_setall(struct in_device *in_dev) ipv4_devconf_set((in_dev), NET_IPV4_CONF_ ## attr, (val)) #define IN_DEV_ANDCONF(in_dev, attr) \ - (IPV4_DEVCONF_ALL(in_dev->dev->nd_net, attr) && \ + (IPV4_DEVCONF_ALL(dev_net(in_dev->dev), attr) && \ IN_DEV_CONF_GET((in_dev), attr)) #define IN_DEV_ORCONF(in_dev, attr) \ - (IPV4_DEVCONF_ALL(in_dev->dev->nd_net, attr) || \ + (IPV4_DEVCONF_ALL(dev_net(in_dev->dev), attr) || \ IN_DEV_CONF_GET((in_dev), attr)) #define IN_DEV_MAXCONF(in_dev, attr) \ - (max(IPV4_DEVCONF_ALL(in_dev->dev->nd_net, attr), \ + (max(IPV4_DEVCONF_ALL(dev_net(in_dev->dev), attr), \ IN_DEV_CONF_GET((in_dev), attr))) #define IN_DEV_FORWARD(in_dev) IN_DEV_CONF_GET((in_dev), FORWARDING) diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h index ced61f87660e..d146be40f46c 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -708,8 +708,10 @@ struct net_device void (*poll_controller)(struct net_device *dev); #endif +#ifdef CONFIG_NET_NS /* Network namespace this network device is inside */ struct net *nd_net; +#endif /* bridge stuff */ struct net_bridge_port *br_port; @@ -737,6 +739,27 @@ struct net_device #define NETDEV_ALIGN 32 #define NETDEV_ALIGN_CONST (NETDEV_ALIGN - 1) +/* + * Net namespace inlines + */ +static inline +struct net *dev_net(const struct net_device *dev) +{ +#ifdef CONFIG_NET_NS + return dev->nd_net; +#else + return &init_net; +#endif +} + +static inline +void dev_net_set(struct net_device *dev, const struct net *net) +{ +#ifdef CONFIG_NET_NS + dev->nd_dev = net; +#endif +} + /** * netdev_priv - access network device private data * @dev: network device @@ -813,7 +836,7 @@ static inline struct net_device *next_net_device(struct net_device *dev) struct list_head *lh; struct net *net; - net = dev->nd_net; + net = dev_net(dev); lh = dev->dev_list.next; return lh == &net->dev_base_head ? 
NULL : net_device_entry(lh); } diff --git a/net/8021q/vlan.c b/net/8021q/vlan.c index dbc81b965096..c35dc230365c 100644 --- a/net/8021q/vlan.c +++ b/net/8021q/vlan.c @@ -382,7 +382,7 @@ static int vlan_device_event(struct notifier_block *unused, unsigned long event, int i, flgs; struct net_device *vlandev; - if (dev->nd_net != &init_net) + if (dev_net(dev) != &init_net) return NOTIFY_DONE; if (!grp) diff --git a/net/8021q/vlan_dev.c b/net/8021q/vlan_dev.c index 1e5c9904571d..e536162b1ebc 100644 --- a/net/8021q/vlan_dev.c +++ b/net/8021q/vlan_dev.c @@ -153,7 +153,7 @@ int vlan_skb_recv(struct sk_buff *skb, struct net_device *dev, struct net_device_stats *stats; unsigned short vlan_TCI; - if (dev->nd_net != &init_net) + if (dev_net(dev) != &init_net) goto err_free; skb = skb_share_check(skb, GFP_ATOMIC); diff --git a/net/appletalk/aarp.c b/net/appletalk/aarp.c index 61166f66479f..25aa37ce9430 100644 --- a/net/appletalk/aarp.c +++ b/net/appletalk/aarp.c @@ -333,7 +333,7 @@ static int aarp_device_event(struct notifier_block *this, unsigned long event, struct net_device *dev = ptr; int ct; - if (dev->nd_net != &init_net) + if (dev_net(dev) != &init_net) return NOTIFY_DONE; if (event == NETDEV_DOWN) { @@ -716,7 +716,7 @@ static int aarp_rcv(struct sk_buff *skb, struct net_device *dev, struct atalk_addr sa, *ma, da; struct atalk_iface *ifa; - if (dev->nd_net != &init_net) + if (dev_net(dev) != &init_net) goto out0; /* We only do Ethernet SNAP AARP. */ diff --git a/net/appletalk/ddp.c b/net/appletalk/ddp.c index 3be55c8ca4ef..44cd42f7786b 100644 --- a/net/appletalk/ddp.c +++ b/net/appletalk/ddp.c @@ -648,7 +648,7 @@ static int ddp_device_event(struct notifier_block *this, unsigned long event, { struct net_device *dev = ptr; - if (dev->nd_net != &init_net) + if (dev_net(dev) != &init_net) return NOTIFY_DONE; if (event == NETDEV_DOWN) @@ -1405,7 +1405,7 @@ static int atalk_rcv(struct sk_buff *skb, struct net_device *dev, int origlen; __u16 len_hops; - if (dev->nd_net != &init_net) + if (dev_net(dev) != &init_net) goto freeit; /* Don't mangle buffer if shared */ @@ -1493,7 +1493,7 @@ freeit: static int ltalk_rcv(struct sk_buff *skb, struct net_device *dev, struct packet_type *pt, struct net_device *orig_dev) { - if (dev->nd_net != &init_net) + if (dev_net(dev) != &init_net) goto freeit; /* Expand any short form frames */ diff --git a/net/atm/clip.c b/net/atm/clip.c index e82da6746723..6f8223ebf551 100644 --- a/net/atm/clip.c +++ b/net/atm/clip.c @@ -612,7 +612,7 @@ static int clip_device_event(struct notifier_block *this, unsigned long event, { struct net_device *dev = arg; - if (dev->nd_net != &init_net) + if (dev_net(dev) != &init_net) return NOTIFY_DONE; if (event == NETDEV_UNREGISTER) { diff --git a/net/atm/mpc.c b/net/atm/mpc.c index 9c7f712fc7e9..9db332e7a6c0 100644 --- a/net/atm/mpc.c +++ b/net/atm/mpc.c @@ -964,7 +964,7 @@ static int mpoa_event_listener(struct notifier_block *mpoa_notifier, unsigned lo dev = (struct net_device *)dev_ptr; - if (dev->nd_net != &init_net) + if (dev_net(dev) != &init_net) return NOTIFY_DONE; if (dev->name == NULL || strncmp(dev->name, "lec", 3)) diff --git a/net/ax25/af_ax25.c b/net/ax25/af_ax25.c index 48bfcc741f25..ee9dd83e7561 100644 --- a/net/ax25/af_ax25.c +++ b/net/ax25/af_ax25.c @@ -116,7 +116,7 @@ static int ax25_device_event(struct notifier_block *this, unsigned long event, { struct net_device *dev = (struct net_device *)ptr; - if (dev->nd_net != &init_net) + if (dev_net(dev) != &init_net) return NOTIFY_DONE; /* Reject non AX.25 devices */ diff --git 
a/net/ax25/ax25_in.c b/net/ax25/ax25_in.c index d1be080dcb25..33790a8efbc8 100644 --- a/net/ax25/ax25_in.c +++ b/net/ax25/ax25_in.c @@ -451,7 +451,7 @@ int ax25_kiss_rcv(struct sk_buff *skb, struct net_device *dev, skb->sk = NULL; /* Initially we don't know who it's for */ skb->destructor = NULL; /* Who initializes this, dammit?! */ - if (dev->nd_net != &init_net) { + if (dev_net(dev) != &init_net) { kfree_skb(skb); return 0; } diff --git a/net/bridge/br_notify.c b/net/bridge/br_notify.c index 07ac3ae68d8f..00644a544e3c 100644 --- a/net/bridge/br_notify.c +++ b/net/bridge/br_notify.c @@ -37,7 +37,7 @@ static int br_device_event(struct notifier_block *unused, unsigned long event, v struct net_bridge_port *p = dev->br_port; struct net_bridge *br; - if (dev->nd_net != &init_net) + if (dev_net(dev) != &init_net) return NOTIFY_DONE; /* not a port of a bridge */ diff --git a/net/bridge/br_stp_bpdu.c b/net/bridge/br_stp_bpdu.c index 0edbd2a1c3f3..8deab645ef75 100644 --- a/net/bridge/br_stp_bpdu.c +++ b/net/bridge/br_stp_bpdu.c @@ -142,7 +142,7 @@ int br_stp_rcv(struct sk_buff *skb, struct net_device *dev, struct net_bridge *br; const unsigned char *buf; - if (dev->nd_net != &init_net) + if (dev_net(dev) != &init_net) goto err; if (!p) diff --git a/net/can/af_can.c b/net/can/af_can.c index 36b9f22ed83a..2759b76f731c 100644 --- a/net/can/af_can.c +++ b/net/can/af_can.c @@ -599,7 +599,7 @@ static int can_rcv(struct sk_buff *skb, struct net_device *dev, struct dev_rcv_lists *d; int matches; - if (dev->type != ARPHRD_CAN || dev->nd_net != &init_net) { + if (dev->type != ARPHRD_CAN || dev_net(dev) != &init_net) { kfree_skb(skb); return 0; } @@ -710,7 +710,7 @@ static int can_notifier(struct notifier_block *nb, unsigned long msg, struct net_device *dev = (struct net_device *)data; struct dev_rcv_lists *d; - if (dev->nd_net != &init_net) + if (dev_net(dev) != &init_net) return NOTIFY_DONE; if (dev->type != ARPHRD_CAN) diff --git a/net/can/bcm.c b/net/can/bcm.c index bd4282dae754..e9f99b2c6bc9 100644 --- a/net/can/bcm.c +++ b/net/can/bcm.c @@ -1285,7 +1285,7 @@ static int bcm_notifier(struct notifier_block *nb, unsigned long msg, struct bcm_op *op; int notify_enodev = 0; - if (dev->nd_net != &init_net) + if (dev_net(dev) != &init_net) return NOTIFY_DONE; if (dev->type != ARPHRD_CAN) diff --git a/net/can/raw.c b/net/can/raw.c index 94cd7f27c444..ead50c7c0d40 100644 --- a/net/can/raw.c +++ b/net/can/raw.c @@ -210,7 +210,7 @@ static int raw_notifier(struct notifier_block *nb, struct raw_sock *ro = container_of(nb, struct raw_sock, notifier); struct sock *sk = &ro->sk; - if (dev->nd_net != &init_net) + if (dev_net(dev) != &init_net) return NOTIFY_DONE; if (dev->type != ARPHRD_CAN) diff --git a/net/core/dev.c b/net/core/dev.c index aebd08606040..812534828914 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -216,7 +216,7 @@ static inline struct hlist_head *dev_index_hash(struct net *net, int ifindex) /* Device list insertion */ static int list_netdevice(struct net_device *dev) { - struct net *net = dev->nd_net; + struct net *net = dev_net(dev); ASSERT_RTNL(); @@ -852,8 +852,8 @@ int dev_alloc_name(struct net_device *dev, const char *name) struct net *net; int ret; - BUG_ON(!dev->nd_net); - net = dev->nd_net; + BUG_ON(!dev_net(dev)); + net = dev_net(dev); ret = __dev_alloc_name(net, name, buf); if (ret >= 0) strlcpy(dev->name, buf, IFNAMSIZ); @@ -877,9 +877,9 @@ int dev_change_name(struct net_device *dev, char *newname) struct net *net; ASSERT_RTNL(); - BUG_ON(!dev->nd_net); + BUG_ON(!dev_net(dev)); - net = 
dev->nd_net; + net = dev_net(dev); if (dev->flags & IFF_UP) return -EBUSY; @@ -2615,7 +2615,7 @@ static int ptype_seq_show(struct seq_file *seq, void *v) if (v == SEQ_START_TOKEN) seq_puts(seq, "Type Device Function\n"); - else if (pt->dev == NULL || pt->dev->nd_net == seq_file_net(seq)) { + else if (pt->dev == NULL || dev_net(pt->dev) == seq_file_net(seq)) { if (pt->type == htons(ETH_P_ALL)) seq_puts(seq, "ALL "); else @@ -3689,8 +3689,8 @@ int register_netdevice(struct net_device *dev) /* When net_device's are persistent, this will be fatal. */ BUG_ON(dev->reg_state != NETREG_UNINITIALIZED); - BUG_ON(!dev->nd_net); - net = dev->nd_net; + BUG_ON(!dev_net(dev)); + net = dev_net(dev); spin_lock_init(&dev->queue_lock); spin_lock_init(&dev->_xmit_lock); @@ -4011,7 +4011,7 @@ struct net_device *alloc_netdev_mq(int sizeof_priv, const char *name, dev = (struct net_device *) (((long)p + NETDEV_ALIGN_CONST) & ~NETDEV_ALIGN_CONST); dev->padded = (char *)dev - (char *)p; - dev->nd_net = &init_net; + dev_net_set(dev, &init_net); if (sizeof_priv) { dev->priv = ((char *)dev + @@ -4136,7 +4136,7 @@ int dev_change_net_namespace(struct net_device *dev, struct net *net, const char /* Get out if there is nothing todo */ err = 0; - if (dev->nd_net == net) + if (dev_net(dev) == net) goto out; /* Pick the destination device name, and ensure @@ -4187,7 +4187,7 @@ int dev_change_net_namespace(struct net_device *dev, struct net *net, const char dev_addr_discard(dev); /* Actually switch the network namespace */ - dev->nd_net = net; + dev_net_set(dev, net); /* Assign the new device name */ if (destname != dev->name) diff --git a/net/core/dst.c b/net/core/dst.c index 3a01a819ba47..694cd2a3f6d2 100644 --- a/net/core/dst.c +++ b/net/core/dst.c @@ -279,7 +279,7 @@ static inline void dst_ifdown(struct dst_entry *dst, struct net_device *dev, if (!unregister) { dst->input = dst->output = dst_discard; } else { - dst->dev = dst->dev->nd_net->loopback_dev; + dst->dev = dev_net(dst->dev)->loopback_dev; dev_hold(dst->dev); dev_put(dev); if (dst->neighbour && dst->neighbour->dev == dev) { diff --git a/net/core/fib_rules.c b/net/core/fib_rules.c index 42ccaf5b8509..942be93a2eb0 100644 --- a/net/core/fib_rules.c +++ b/net/core/fib_rules.c @@ -618,7 +618,7 @@ static int fib_rules_event(struct notifier_block *this, unsigned long event, void *ptr) { struct net_device *dev = ptr; - struct net *net = dev->nd_net; + struct net *net = dev_net(dev); struct fib_rules_ops *ops; ASSERT_RTNL(); diff --git a/net/core/neighbour.c b/net/core/neighbour.c index 23c0a10c0c37..c978bd1cd659 100644 --- a/net/core/neighbour.c +++ b/net/core/neighbour.c @@ -388,7 +388,7 @@ struct neighbour *neigh_lookup_nodev(struct neigh_table *tbl, struct net *net, hash_val = tbl->hash(pkey, NULL); for (n = tbl->hash_buckets[hash_val & tbl->hash_mask]; n; n = n->next) { if (!memcmp(n->primary_key, pkey, key_len) && - (net == n->dev->nd_net)) { + dev_net(n->dev) == net) { neigh_hold(n); NEIGH_CACHE_STAT_INC(tbl, hits); break; @@ -1298,7 +1298,7 @@ struct neigh_parms *neigh_parms_alloc(struct net_device *dev, struct neigh_parms *p, *ref; struct net *net; - net = dev->nd_net; + net = dev_net(dev); ref = lookup_neigh_params(tbl, net, 0); if (!ref) return NULL; @@ -2050,7 +2050,7 @@ static int neigh_dump_table(struct neigh_table *tbl, struct sk_buff *skb, s_idx = 0; for (n = tbl->hash_buckets[h], idx = 0; n; n = n->next) { int lidx; - if (n->dev->nd_net != net) + if (dev_net(n->dev) != net) continue; lidx = idx++; if (lidx < s_idx) @@ -2155,7 +2155,7 @@ static struct 
neighbour *neigh_get_first(struct seq_file *seq) n = tbl->hash_buckets[bucket]; while (n) { - if (n->dev->nd_net != net) + if (dev_net(n->dev) != net) goto next; if (state->neigh_sub_iter) { loff_t fakep = 0; @@ -2198,7 +2198,7 @@ static struct neighbour *neigh_get_next(struct seq_file *seq, while (1) { while (n) { - if (n->dev->nd_net != net) + if (dev_net(n->dev) != net) goto next; if (state->neigh_sub_iter) { void *v = state->neigh_sub_iter(state, n, pos); @@ -2482,7 +2482,7 @@ static inline size_t neigh_nlmsg_size(void) static void __neigh_notify(struct neighbour *n, int type, int flags) { - struct net *net = n->dev->nd_net; + struct net *net = dev_net(n->dev); struct sk_buff *skb; int err = -ENOBUFS; diff --git a/net/core/pktgen.c b/net/core/pktgen.c index 20e63b302ba6..a803b442234c 100644 --- a/net/core/pktgen.c +++ b/net/core/pktgen.c @@ -1874,7 +1874,7 @@ static int pktgen_device_event(struct notifier_block *unused, { struct net_device *dev = ptr; - if (dev->nd_net != &init_net) + if (dev_net(dev) != &init_net) return NOTIFY_DONE; /* It is OK that we do not hold the group lock right now, diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c index 2bd9c5f7627d..09250a0800f6 100644 --- a/net/core/rtnetlink.c +++ b/net/core/rtnetlink.c @@ -972,7 +972,7 @@ struct net_device *rtnl_create_link(struct net *net, char *ifname, goto err_free; } - dev->nd_net = net; + dev_net_set(dev, net); dev->rtnl_link_ops = ops; if (tb[IFLA_MTU]) @@ -1198,7 +1198,7 @@ static int rtnl_dump_all(struct sk_buff *skb, struct netlink_callback *cb) void rtmsg_ifinfo(int type, struct net_device *dev, unsigned change) { - struct net *net = dev->nd_net; + struct net *net = dev_net(dev); struct sk_buff *skb; int err = -ENOBUFS; diff --git a/net/decnet/af_decnet.c b/net/decnet/af_decnet.c index 23fd95a7ad15..3554fb3d251c 100644 --- a/net/decnet/af_decnet.c +++ b/net/decnet/af_decnet.c @@ -2089,7 +2089,7 @@ static int dn_device_event(struct notifier_block *this, unsigned long event, { struct net_device *dev = (struct net_device *)ptr; - if (dev->nd_net != &init_net) + if (dev_net(dev) != &init_net) return NOTIFY_DONE; switch(event) { diff --git a/net/decnet/dn_route.c b/net/decnet/dn_route.c index 9dc0abb50eaf..0a46b6c10e51 100644 --- a/net/decnet/dn_route.c +++ b/net/decnet/dn_route.c @@ -580,7 +580,7 @@ int dn_route_rcv(struct sk_buff *skb, struct net_device *dev, struct packet_type struct dn_dev *dn = (struct dn_dev *)dev->dn_ptr; unsigned char padlen = 0; - if (dev->nd_net != &init_net) + if (dev_net(dev) != &init_net) goto dump_it; if (dn == NULL) diff --git a/net/econet/af_econet.c b/net/econet/af_econet.c index bc0f6252613f..68d154480043 100644 --- a/net/econet/af_econet.c +++ b/net/econet/af_econet.c @@ -1064,7 +1064,7 @@ static int econet_rcv(struct sk_buff *skb, struct net_device *dev, struct packet struct sock *sk; struct ec_device *edev = dev->ec_ptr; - if (dev->nd_net != &init_net) + if (dev_net(dev) != &init_net) goto drop; if (skb->pkt_type == PACKET_OTHERHOST) @@ -1121,7 +1121,7 @@ static int econet_notifier(struct notifier_block *this, unsigned long msg, void struct net_device *dev = (struct net_device *)data; struct ec_device *edev; - if (dev->nd_net != &init_net) + if (dev_net(dev) != &init_net) return NOTIFY_DONE; switch (msg) { diff --git a/net/ipv4/arp.c b/net/ipv4/arp.c index 832473e30b36..3ce2e137e7bc 100644 --- a/net/ipv4/arp.c +++ b/net/ipv4/arp.c @@ -242,7 +242,7 @@ static int arp_constructor(struct neighbour *neigh) return -EINVAL; } - neigh->type = inet_addr_type(dev->nd_net, addr); + 
neigh->type = inet_addr_type(dev_net(dev), addr); parms = in_dev->arp_parms; __neigh_parms_put(neigh->parms); @@ -341,14 +341,14 @@ static void arp_solicit(struct neighbour *neigh, struct sk_buff *skb) switch (IN_DEV_ARP_ANNOUNCE(in_dev)) { default: case 0: /* By default announce any local IP */ - if (skb && inet_addr_type(dev->nd_net, ip_hdr(skb)->saddr) == RTN_LOCAL) + if (skb && inet_addr_type(dev_net(dev), ip_hdr(skb)->saddr) == RTN_LOCAL) saddr = ip_hdr(skb)->saddr; break; case 1: /* Restrict announcements of saddr in same subnet */ if (!skb) break; saddr = ip_hdr(skb)->saddr; - if (inet_addr_type(dev->nd_net, saddr) == RTN_LOCAL) { + if (inet_addr_type(dev_net(dev), saddr) == RTN_LOCAL) { /* saddr should be known to target */ if (inet_addr_onlink(in_dev, target, saddr)) break; @@ -424,7 +424,7 @@ static int arp_filter(__be32 sip, __be32 tip, struct net_device *dev) int flag = 0; /*unsigned long now; */ - if (ip_route_output_key(dev->nd_net, &rt, &fl) < 0) + if (ip_route_output_key(dev_net(dev), &rt, &fl) < 0) return 1; if (rt->u.dst.dev != dev) { NET_INC_STATS_BH(LINUX_MIB_ARPFILTER); @@ -477,7 +477,7 @@ int arp_find(unsigned char *haddr, struct sk_buff *skb) paddr = skb->rtable->rt_gateway; - if (arp_set_predefined(inet_addr_type(dev->nd_net, paddr), haddr, paddr, dev)) + if (arp_set_predefined(inet_addr_type(dev_net(dev), paddr), haddr, paddr, dev)) return 0; n = __neigh_lookup(&arp_tbl, &paddr, dev, 1); @@ -709,7 +709,7 @@ static int arp_process(struct sk_buff *skb) u16 dev_type = dev->type; int addr_type; struct neighbour *n; - struct net *net = dev->nd_net; + struct net *net = dev_net(dev); /* arp_rcv below verifies the ARP header and verifies the device * is ARP'able. @@ -858,7 +858,7 @@ static int arp_process(struct sk_buff *skb) n = __neigh_lookup(&arp_tbl, &sip, dev, 0); - if (IPV4_DEVCONF_ALL(dev->nd_net, ARP_ACCEPT)) { + if (IPV4_DEVCONF_ALL(dev_net(dev), ARP_ACCEPT)) { /* Unsolicited ARP is not accepted by default. 
It is possible, that this option should be enabled for some devices (strip is candidate) diff --git a/net/ipv4/devinet.c b/net/ipv4/devinet.c index 4a10dbbbe0a1..823c724a8593 100644 --- a/net/ipv4/devinet.c +++ b/net/ipv4/devinet.c @@ -165,7 +165,7 @@ static struct in_device *inetdev_init(struct net_device *dev) if (!in_dev) goto out; INIT_RCU_HEAD(&in_dev->rcu_head); - memcpy(&in_dev->cnf, dev->nd_net->ipv4.devconf_dflt, + memcpy(&in_dev->cnf, dev_net(dev)->ipv4.devconf_dflt, sizeof(in_dev->cnf)); in_dev->cnf.sysctl = NULL; in_dev->dev = dev; @@ -872,7 +872,7 @@ __be32 inet_select_addr(const struct net_device *dev, __be32 dst, int scope) { __be32 addr = 0; struct in_device *in_dev; - struct net *net = dev->nd_net; + struct net *net = dev_net(dev); rcu_read_lock(); in_dev = __in_dev_get_rcu(dev); @@ -974,7 +974,7 @@ __be32 inet_confirm_addr(struct in_device *in_dev, if (scope != RT_SCOPE_LINK) return confirm_addr_indev(in_dev, dst, local, scope); - net = in_dev->dev->nd_net; + net = dev_net(in_dev->dev); read_lock(&dev_base_lock); rcu_read_lock(); for_each_netdev(net, dev) { @@ -1203,7 +1203,7 @@ static void rtmsg_ifa(int event, struct in_ifaddr* ifa, struct nlmsghdr *nlh, int err = -ENOBUFS; struct net *net; - net = ifa->ifa_dev->dev->nd_net; + net = dev_net(ifa->ifa_dev->dev); skb = nlmsg_new(inet_nlmsg_size(), GFP_KERNEL); if (skb == NULL) goto errout; @@ -1517,7 +1517,7 @@ static void devinet_sysctl_register(struct in_device *idev) { neigh_sysctl_register(idev->dev, idev->arp_parms, NET_IPV4, NET_IPV4_NEIGH, "ipv4", NULL, NULL); - __devinet_sysctl_register(idev->dev->nd_net, idev->dev->name, + __devinet_sysctl_register(dev_net(idev->dev), idev->dev->name, idev->dev->ifindex, &idev->cnf); } diff --git a/net/ipv4/fib_frontend.c b/net/ipv4/fib_frontend.c index 86ff2711fc95..0e4b34b07cb5 100644 --- a/net/ipv4/fib_frontend.c +++ b/net/ipv4/fib_frontend.c @@ -257,7 +257,7 @@ int fib_validate_source(__be32 src, __be32 dst, u8 tos, int oif, if (in_dev == NULL) goto e_inval; - net = dev->nd_net; + net = dev_net(dev); if (fib_lookup(net, &fl, &res)) goto last_resort; if (res.type != RTN_UNICAST) @@ -674,7 +674,7 @@ out: static void fib_magic(int cmd, int type, __be32 dst, int dst_len, struct in_ifaddr *ifa) { - struct net *net = ifa->ifa_dev->dev->nd_net; + struct net *net = dev_net(ifa->ifa_dev->dev); struct fib_table *tb; struct fib_config cfg = { .fc_protocol = RTPROT_KERNEL, @@ -801,15 +801,15 @@ static void fib_del_ifaddr(struct in_ifaddr *ifa) fib_magic(RTM_DELROUTE, RTN_LOCAL, ifa->ifa_local, 32, prim); /* Check, that this local address finally disappeared. */ - if (inet_addr_type(dev->nd_net, ifa->ifa_local) != RTN_LOCAL) { + if (inet_addr_type(dev_net(dev), ifa->ifa_local) != RTN_LOCAL) { /* And the last, but not the least thing. We must flush stray FIB entries. First of all, we scan fib_info list searching for stray nexthop entries, then ignite fib_flush. 
*/ - if (fib_sync_down_addr(dev->nd_net, ifa->ifa_local)) - fib_flush(dev->nd_net); + if (fib_sync_down_addr(dev_net(dev), ifa->ifa_local)) + fib_flush(dev_net(dev)); } } #undef LOCAL_OK @@ -899,7 +899,7 @@ static void nl_fib_lookup_exit(struct net *net) static void fib_disable_ip(struct net_device *dev, int force) { if (fib_sync_down_dev(dev, force)) - fib_flush(dev->nd_net); + fib_flush(dev_net(dev)); rt_cache_flush(0); arp_ifdown(dev); } diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c index ff9a8e643fcc..f38f093ef751 100644 --- a/net/ipv4/icmp.c +++ b/net/ipv4/icmp.c @@ -351,7 +351,7 @@ static void icmp_push_reply(struct icmp_bxm *icmp_param, struct sock *sk; struct sk_buff *skb; - sk = icmp_sk(rt->u.dst.dev->nd_net); + sk = icmp_sk(dev_net(rt->u.dst.dev)); if (ip_append_data(sk, icmp_glue_bits, icmp_param, icmp_param->data_len+icmp_param->head_len, icmp_param->head_len, @@ -382,7 +382,7 @@ static void icmp_reply(struct icmp_bxm *icmp_param, struct sk_buff *skb) { struct ipcm_cookie ipc; struct rtable *rt = skb->rtable; - struct net *net = rt->u.dst.dev->nd_net; + struct net *net = dev_net(rt->u.dst.dev); struct sock *sk = icmp_sk(net); struct inet_sock *inet = inet_sk(sk); __be32 daddr; @@ -447,7 +447,7 @@ void icmp_send(struct sk_buff *skb_in, int type, int code, __be32 info) if (!rt) goto out; - net = rt->u.dst.dev->nd_net; + net = dev_net(rt->u.dst.dev); sk = icmp_sk(net); /* @@ -677,7 +677,7 @@ static void icmp_unreach(struct sk_buff *skb) u32 info = 0; struct net *net; - net = skb->dst->dev->nd_net; + net = dev_net(skb->dst->dev); /* * Incomplete header ? diff --git a/net/ipv4/igmp.c b/net/ipv4/igmp.c index 6a4ee8da6994..682f632bfb77 100644 --- a/net/ipv4/igmp.c +++ b/net/ipv4/igmp.c @@ -130,12 +130,12 @@ */ #define IGMP_V1_SEEN(in_dev) \ - (IPV4_DEVCONF_ALL(in_dev->dev->nd_net, FORCE_IGMP_VERSION) == 1 || \ + (IPV4_DEVCONF_ALL(dev_net(in_dev->dev), FORCE_IGMP_VERSION) == 1 || \ IN_DEV_CONF_GET((in_dev), FORCE_IGMP_VERSION) == 1 || \ ((in_dev)->mr_v1_seen && \ time_before(jiffies, (in_dev)->mr_v1_seen))) #define IGMP_V2_SEEN(in_dev) \ - (IPV4_DEVCONF_ALL(in_dev->dev->nd_net, FORCE_IGMP_VERSION) == 2 || \ + (IPV4_DEVCONF_ALL(dev_net(in_dev->dev), FORCE_IGMP_VERSION) == 2 || \ IN_DEV_CONF_GET((in_dev), FORCE_IGMP_VERSION) == 2 || \ ((in_dev)->mr_v2_seen && \ time_before(jiffies, (in_dev)->mr_v2_seen))) @@ -1198,7 +1198,7 @@ void ip_mc_inc_group(struct in_device *in_dev, __be32 addr) ASSERT_RTNL(); - if (in_dev->dev->nd_net != &init_net) + if (dev_net(in_dev->dev) != &init_net) return; for (im=in_dev->mc_list; im; im=im->next) { @@ -1280,7 +1280,7 @@ void ip_mc_dec_group(struct in_device *in_dev, __be32 addr) ASSERT_RTNL(); - if (in_dev->dev->nd_net != &init_net) + if (dev_net(in_dev->dev) != &init_net) return; for (ip=&in_dev->mc_list; (i=*ip)!=NULL; ip=&i->next) { @@ -1310,7 +1310,7 @@ void ip_mc_down(struct in_device *in_dev) ASSERT_RTNL(); - if (in_dev->dev->nd_net != &init_net) + if (dev_net(in_dev->dev) != &init_net) return; for (i=in_dev->mc_list; i; i=i->next) @@ -1333,7 +1333,7 @@ void ip_mc_init_dev(struct in_device *in_dev) { ASSERT_RTNL(); - if (in_dev->dev->nd_net != &init_net) + if (dev_net(in_dev->dev) != &init_net) return; in_dev->mc_tomb = NULL; @@ -1359,7 +1359,7 @@ void ip_mc_up(struct in_device *in_dev) ASSERT_RTNL(); - if (in_dev->dev->nd_net != &init_net) + if (dev_net(in_dev->dev) != &init_net) return; ip_mc_inc_group(in_dev, IGMP_ALL_HOSTS); @@ -1378,7 +1378,7 @@ void ip_mc_destroy_dev(struct in_device *in_dev) ASSERT_RTNL(); - if (in_dev->dev->nd_net != 
&init_net) + if (dev_net(in_dev->dev) != &init_net) return; /* Deactivate timers */ diff --git a/net/ipv4/ip_fragment.c b/net/ipv4/ip_fragment.c index 8b448c4b9080..fcb60e76b234 100644 --- a/net/ipv4/ip_fragment.c +++ b/net/ipv4/ip_fragment.c @@ -571,7 +571,7 @@ int ip_defrag(struct sk_buff *skb, u32 user) IP_INC_STATS_BH(IPSTATS_MIB_REASMREQDS); - net = skb->dev ? skb->dev->nd_net : skb->dst->dev->nd_net; + net = skb->dev ? dev_net(skb->dev) : dev_net(skb->dst->dev); /* Start by cleaning up the memory. */ if (atomic_read(&net->ipv4.frags.mem) > net->ipv4.frags.high_thresh) ip_evictor(net); diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c index f9ee84420cb3..50972b397a9a 100644 --- a/net/ipv4/ip_gre.c +++ b/net/ipv4/ip_gre.c @@ -1190,7 +1190,7 @@ static int ipgre_close(struct net_device *dev) struct ip_tunnel *t = netdev_priv(dev); if (ipv4_is_multicast(t->parms.iph.daddr) && t->mlink) { struct in_device *in_dev; - in_dev = inetdev_by_index(dev->nd_net, t->mlink); + in_dev = inetdev_by_index(dev_net(dev), t->mlink); if (in_dev) { ip_mc_dec_group(in_dev, t->parms.iph.daddr); in_dev_put(in_dev); diff --git a/net/ipv4/ip_input.c b/net/ipv4/ip_input.c index 2aeea5d15425..26685c83a146 100644 --- a/net/ipv4/ip_input.c +++ b/net/ipv4/ip_input.c @@ -172,7 +172,7 @@ int ip_call_ra_chain(struct sk_buff *skb) if (sk && inet_sk(sk)->num == protocol && (!sk->sk_bound_dev_if || sk->sk_bound_dev_if == dev->ifindex) && - sk->sk_net == dev->nd_net) { + sk->sk_net == dev_net(dev)) { if (ip_hdr(skb)->frag_off & htons(IP_MF | IP_OFFSET)) { if (ip_defrag(skb, IP_DEFRAG_CALL_RA_CHAIN)) { read_unlock(&ip_ra_lock); @@ -199,7 +199,7 @@ int ip_call_ra_chain(struct sk_buff *skb) static int ip_local_deliver_finish(struct sk_buff *skb) { - struct net *net = skb->dev->nd_net; + struct net *net = dev_net(skb->dev); __skb_pull(skb, ip_hdrlen(skb)); @@ -291,7 +291,7 @@ static inline int ip_rcv_options(struct sk_buff *skb) opt = &(IPCB(skb)->opt); opt->optlen = iph->ihl*4 - sizeof(struct iphdr); - if (ip_options_compile(dev->nd_net, opt, skb)) { + if (ip_options_compile(dev_net(dev), opt, skb)) { IP_INC_STATS_BH(IPSTATS_MIB_INHDRERRORS); goto drop; } diff --git a/net/ipv4/ip_options.c b/net/ipv4/ip_options.c index 87cc1222c600..d107543d3f81 100644 --- a/net/ipv4/ip_options.c +++ b/net/ipv4/ip_options.c @@ -145,7 +145,7 @@ int ip_options_echo(struct ip_options * dopt, struct sk_buff * skb) __be32 addr; memcpy(&addr, sptr+soffset-1, 4); - if (inet_addr_type(skb->dst->dev->nd_net, addr) != RTN_LOCAL) { + if (inet_addr_type(dev_net(skb->dst->dev), addr) != RTN_LOCAL) { dopt->ts_needtime = 1; soffset += 8; } diff --git a/net/ipv4/ipconfig.c b/net/ipv4/ipconfig.c index 96138b128de8..08e8fb60d315 100644 --- a/net/ipv4/ipconfig.c +++ b/net/ipv4/ipconfig.c @@ -434,7 +434,7 @@ ic_rarp_recv(struct sk_buff *skb, struct net_device *dev, struct packet_type *pt unsigned char *sha, *tha; /* s for "source", t for "target" */ struct ic_device *d; - if (dev->nd_net != &init_net) + if (dev_net(dev) != &init_net) goto drop; if ((skb = skb_share_check(skb, GFP_ATOMIC)) == NULL) @@ -854,7 +854,7 @@ static int __init ic_bootp_recv(struct sk_buff *skb, struct net_device *dev, str struct ic_device *d; int len, ext_len; - if (dev->nd_net != &init_net) + if (dev_net(dev) != &init_net) goto drop; /* Perform verifications before taking the lock. 
*/ diff --git a/net/ipv4/ipmr.c b/net/ipv4/ipmr.c index 7d63d74ef62a..e54bc1364473 100644 --- a/net/ipv4/ipmr.c +++ b/net/ipv4/ipmr.c @@ -1089,7 +1089,7 @@ static int ipmr_device_event(struct notifier_block *this, unsigned long event, v struct vif_device *v; int ct; - if (dev->nd_net != &init_net) + if (dev_net(dev) != &init_net) return NOTIFY_DONE; if (event != NETDEV_UNREGISTER) diff --git a/net/ipv4/netfilter/ip_queue.c b/net/ipv4/netfilter/ip_queue.c index fe05da41d6ba..500998a2dec1 100644 --- a/net/ipv4/netfilter/ip_queue.c +++ b/net/ipv4/netfilter/ip_queue.c @@ -481,7 +481,7 @@ ipq_rcv_dev_event(struct notifier_block *this, { struct net_device *dev = ptr; - if (dev->nd_net != &init_net) + if (dev_net(dev) != &init_net) return NOTIFY_DONE; /* Drop any packets associated with the downed device */ diff --git a/net/ipv4/netfilter/ipt_MASQUERADE.c b/net/ipv4/netfilter/ipt_MASQUERADE.c index c6817b18366a..84c26dd27d81 100644 --- a/net/ipv4/netfilter/ipt_MASQUERADE.c +++ b/net/ipv4/netfilter/ipt_MASQUERADE.c @@ -120,7 +120,7 @@ static int masq_device_event(struct notifier_block *this, { const struct net_device *dev = ptr; - if (dev->nd_net != &init_net) + if (dev_net(dev) != &init_net) return NOTIFY_DONE; if (event == NETDEV_DOWN) { diff --git a/net/ipv4/raw.c b/net/ipv4/raw.c index 3f68a937b602..8756d502a47f 100644 --- a/net/ipv4/raw.c +++ b/net/ipv4/raw.c @@ -168,7 +168,7 @@ static int raw_v4_input(struct sk_buff *skb, struct iphdr *iph, int hash) if (hlist_empty(head)) goto out; - net = skb->dev->nd_net; + net = dev_net(skb->dev); sk = __raw_v4_lookup(net, __sk_head(head), iph->protocol, iph->saddr, iph->daddr, skb->dev->ifindex); @@ -276,7 +276,7 @@ void raw_icmp_error(struct sk_buff *skb, int protocol, u32 info) raw_sk = sk_head(&raw_v4_hashinfo.ht[hash]); if (raw_sk != NULL) { iph = (struct iphdr *)skb->data; - net = skb->dev->nd_net; + net = dev_net(skb->dev); while ((raw_sk = __raw_v4_lookup(net, raw_sk, protocol, iph->daddr, iph->saddr, diff --git a/net/ipv4/route.c b/net/ipv4/route.c index 2941ef21f203..7768d718e199 100644 --- a/net/ipv4/route.c +++ b/net/ipv4/route.c @@ -284,7 +284,7 @@ static struct rtable *rt_cache_get_first(struct rt_cache_iter_state *st) rcu_read_lock_bh(); r = rcu_dereference(rt_hash_table[st->bucket].chain); while (r) { - if (r->u.dst.dev->nd_net == st->p.net && + if (dev_net(r->u.dst.dev) == st->p.net && r->rt_genid == st->genid) return r; r = rcu_dereference(r->u.dst.rt_next); @@ -312,7 +312,7 @@ static struct rtable *rt_cache_get_next(struct rt_cache_iter_state *st, struct rtable *r) { while ((r = __rt_cache_get_next(st, r)) != NULL) { - if (r->u.dst.dev->nd_net != st->p.net) + if (dev_net(r->u.dst.dev) != st->p.net) continue; if (r->rt_genid == st->genid) break; @@ -680,7 +680,7 @@ static inline int compare_keys(struct flowi *fl1, struct flowi *fl2) static inline int compare_netns(struct rtable *rt1, struct rtable *rt2) { - return rt1->u.dst.dev->nd_net == rt2->u.dst.dev->nd_net; + return dev_net(rt1->u.dst.dev) == dev_net(rt2->u.dst.dev); } /* @@ -1164,7 +1164,7 @@ void ip_rt_redirect(__be32 old_gw, __be32 daddr, __be32 new_gw, if (!in_dev) return; - net = dev->nd_net; + net = dev_net(dev); if (new_gw == old_gw || !IN_DEV_RX_REDIRECTS(in_dev) || ipv4_is_multicast(new_gw) || ipv4_is_lbcast(new_gw) || ipv4_is_zeronet(new_gw)) @@ -1195,7 +1195,7 @@ void ip_rt_redirect(__be32 old_gw, __be32 daddr, __be32 new_gw, rth->fl.oif != ikeys[k] || rth->fl.iif != 0 || rth->rt_genid != atomic_read(&rt_genid) || - rth->u.dst.dev->nd_net != net) { + 
dev_net(rth->u.dst.dev) != net) { rthp = &rth->u.dst.rt_next; continue; } @@ -1454,7 +1454,7 @@ unsigned short ip_rt_frag_needed(struct net *net, struct iphdr *iph, rth->rt_src == iph->saddr && rth->fl.iif == 0 && !(dst_metric_locked(&rth->u.dst, RTAX_MTU)) && - rth->u.dst.dev->nd_net == net && + dev_net(rth->u.dst.dev) == net && rth->rt_genid == atomic_read(&rt_genid)) { unsigned short mtu = new_mtu; @@ -1530,9 +1530,9 @@ static void ipv4_dst_ifdown(struct dst_entry *dst, struct net_device *dev, { struct rtable *rt = (struct rtable *) dst; struct in_device *idev = rt->idev; - if (dev != dev->nd_net->loopback_dev && idev && idev->dev == dev) { + if (dev != dev_net(dev)->loopback_dev && idev && idev->dev == dev) { struct in_device *loopback_idev = - in_dev_get(dev->nd_net->loopback_dev); + in_dev_get(dev_net(dev)->loopback_dev); if (loopback_idev) { rt->idev = loopback_idev; in_dev_put(idev); @@ -1576,7 +1576,7 @@ void ip_rt_get_source(u8 *addr, struct rtable *rt) if (rt->fl.iif == 0) src = rt->rt_src; - else if (fib_lookup(rt->u.dst.dev->nd_net, &rt->fl, &res) == 0) { + else if (fib_lookup(dev_net(rt->u.dst.dev), &rt->fl, &res) == 0) { src = FIB_RES_PREFSRC(res); fib_res_put(&res); } else @@ -1900,7 +1900,7 @@ static int ip_route_input_slow(struct sk_buff *skb, __be32 daddr, __be32 saddr, __be32 spec_dst; int err = -EINVAL; int free_res = 0; - struct net * net = dev->nd_net; + struct net * net = dev_net(dev); /* IP on this device is disabled. */ @@ -2071,7 +2071,7 @@ int ip_route_input(struct sk_buff *skb, __be32 daddr, __be32 saddr, int iif = dev->ifindex; struct net *net; - net = dev->nd_net; + net = dev_net(dev); tos &= IPTOS_RT_MASK; hash = rt_hash(daddr, saddr, iif); @@ -2084,7 +2084,7 @@ int ip_route_input(struct sk_buff *skb, __be32 daddr, __be32 saddr, rth->fl.oif == 0 && rth->fl.mark == skb->mark && rth->fl.fl4_tos == tos && - rth->u.dst.dev->nd_net == net && + dev_net(rth->u.dst.dev) == net && rth->rt_genid == atomic_read(&rt_genid)) { dst_use(&rth->u.dst, jiffies); RT_CACHE_STAT_INC(in_hit); @@ -2486,7 +2486,7 @@ int __ip_route_output_key(struct net *net, struct rtable **rp, rth->fl.mark == flp->mark && !((rth->fl.fl4_tos ^ flp->fl4_tos) & (IPTOS_RT_MASK | RTO_ONLINK)) && - rth->u.dst.dev->nd_net == net && + dev_net(rth->u.dst.dev) == net && rth->rt_genid == atomic_read(&rt_genid)) { dst_use(&rth->u.dst, jiffies); RT_CACHE_STAT_INC(out_hit); @@ -2795,7 +2795,7 @@ int ip_rt_dump(struct sk_buff *skb, struct netlink_callback *cb) rcu_read_lock_bh(); for (rt = rcu_dereference(rt_hash_table[h].chain), idx = 0; rt; rt = rcu_dereference(rt->u.dst.rt_next), idx++) { - if (rt->u.dst.dev->nd_net != net || idx < s_idx) + if (dev_net(rt->u.dst.dev) != net || idx < s_idx) continue; if (rt->rt_genid != atomic_read(&rt_genid)) continue; diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c index 649d00a50cb1..28bece6f281b 100644 --- a/net/ipv4/tcp_ipv4.c +++ b/net/ipv4/tcp_ipv4.c @@ -353,7 +353,7 @@ void tcp_v4_err(struct sk_buff *skb, u32 info) return; } - sk = inet_lookup(skb->dev->nd_net, &tcp_hashinfo, iph->daddr, th->dest, + sk = inet_lookup(dev_net(skb->dev), &tcp_hashinfo, iph->daddr, th->dest, iph->saddr, th->source, inet_iif(skb)); if (!sk) { ICMP_INC_STATS_BH(ICMP_MIB_INERRORS); @@ -1644,7 +1644,7 @@ int tcp_v4_rcv(struct sk_buff *skb) TCP_SKB_CB(skb)->flags = iph->tos; TCP_SKB_CB(skb)->sacked = 0; - sk = __inet_lookup(skb->dev->nd_net, &tcp_hashinfo, iph->saddr, + sk = __inet_lookup(dev_net(skb->dev), &tcp_hashinfo, iph->saddr, th->source, iph->daddr, th->dest, inet_iif(skb)); if 
(!sk) goto no_tcp_socket; @@ -1718,7 +1718,7 @@ do_time_wait: } switch (tcp_timewait_state_process(inet_twsk(sk), skb, th)) { case TCP_TW_SYN: { - struct sock *sk2 = inet_lookup_listener(skb->dev->nd_net, + struct sock *sk2 = inet_lookup_listener(dev_net(skb->dev), &tcp_hashinfo, iph->daddr, th->dest, inet_iif(skb)); diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c index b37581dfd029..e2cd93481359 100644 --- a/net/ipv4/udp.c +++ b/net/ipv4/udp.c @@ -357,7 +357,7 @@ void __udp4_lib_err(struct sk_buff *skb, u32 info, struct hlist_head udptable[]) int harderr; int err; - sk = __udp4_lib_lookup(skb->dev->nd_net, iph->daddr, uh->dest, + sk = __udp4_lib_lookup(dev_net(skb->dev), iph->daddr, uh->dest, iph->saddr, uh->source, skb->dev->ifindex, udptable); if (sk == NULL) { ICMP_INC_STATS_BH(ICMP_MIB_INERRORS); @@ -1181,7 +1181,7 @@ int __udp4_lib_rcv(struct sk_buff *skb, struct hlist_head udptable[], if (rt->rt_flags & (RTCF_BROADCAST|RTCF_MULTICAST)) return __udp4_lib_mcast_deliver(skb, uh, saddr, daddr, udptable); - sk = __udp4_lib_lookup(skb->dev->nd_net, saddr, uh->source, daddr, + sk = __udp4_lib_lookup(dev_net(skb->dev), saddr, uh->source, daddr, uh->dest, inet_iif(skb), udptable); if (sk != NULL) { diff --git a/net/ipv4/xfrm4_policy.c b/net/ipv4/xfrm4_policy.c index 10ed70491434..c63de0a72aba 100644 --- a/net/ipv4/xfrm4_policy.c +++ b/net/ipv4/xfrm4_policy.c @@ -221,7 +221,7 @@ static void xfrm4_dst_ifdown(struct dst_entry *dst, struct net_device *dev, xdst = (struct xfrm_dst *)dst; if (xdst->u.rt.idev->dev == dev) { struct in_device *loopback_idev = - in_dev_get(dev->nd_net->loopback_dev); + in_dev_get(dev_net(dev)->loopback_dev); BUG_ON(!loopback_idev); do { diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c index 89954885dee1..d1de9ec74261 100644 --- a/net/ipv6/addrconf.c +++ b/net/ipv6/addrconf.c @@ -335,7 +335,7 @@ static struct inet6_dev * ipv6_add_dev(struct net_device *dev) rwlock_init(&ndev->lock); ndev->dev = dev; - memcpy(&ndev->cnf, dev->nd_net->ipv6.devconf_dflt, sizeof(ndev->cnf)); + memcpy(&ndev->cnf, dev_net(dev)->ipv6.devconf_dflt, sizeof(ndev->cnf)); ndev->cnf.mtu6 = dev->mtu; ndev->cnf.sysctl = NULL; ndev->nd_parms = neigh_parms_alloc(dev, &nd_tbl); @@ -561,7 +561,7 @@ ipv6_add_addr(struct inet6_dev *idev, const struct in6_addr *addr, int pfxlen, write_lock(&addrconf_hash_lock); /* Ignore adding duplicate addresses on an interface */ - if (ipv6_chk_same_addr(idev->dev->nd_net, addr, idev->dev)) { + if (ipv6_chk_same_addr(dev_net(idev->dev), addr, idev->dev)) { ADBG(("ipv6_add_addr: already assigned\n")); err = -EEXIST; goto out; @@ -751,7 +751,7 @@ static void ipv6_del_addr(struct inet6_ifaddr *ifp) if ((ifp->flags & IFA_F_PERMANENT) && onlink < 1) { struct in6_addr prefix; struct rt6_info *rt; - struct net *net = ifp->idev->dev->nd_net; + struct net *net = dev_net(ifp->idev->dev); ipv6_addr_prefix(&prefix, &ifp->addr, ifp->prefix_len); rt = rt6_lookup(net, &prefix, NULL, ifp->idev->dev->ifindex, 1); @@ -1044,7 +1044,7 @@ int ipv6_dev_get_saddr(struct net_device *dst_dev, { struct ipv6_saddr_score scores[2], *score = &scores[0], *hiscore = &scores[1]; - struct net *net = dst_dev->nd_net; + struct net *net = dev_net(dst_dev); struct ipv6_saddr_dst dst; struct net_device *dev; int dst_type; @@ -1217,7 +1217,7 @@ int ipv6_chk_addr(struct net *net, struct in6_addr *addr, read_lock_bh(&addrconf_hash_lock); for(ifp = inet6_addr_lst[hash]; ifp; ifp=ifp->lst_next) { - if (ifp->idev->dev->nd_net != net) + if (dev_net(ifp->idev->dev) != net) continue; if 
(ipv6_addr_equal(&ifp->addr, addr) && !(ifp->flags&IFA_F_TENTATIVE)) { @@ -1239,7 +1239,7 @@ int ipv6_chk_same_addr(struct net *net, const struct in6_addr *addr, u8 hash = ipv6_addr_hash(addr); for(ifp = inet6_addr_lst[hash]; ifp; ifp=ifp->lst_next) { - if (ifp->idev->dev->nd_net != net) + if (dev_net(ifp->idev->dev) != net) continue; if (ipv6_addr_equal(&ifp->addr, addr)) { if (dev == NULL || ifp->idev->dev == dev) @@ -1257,7 +1257,7 @@ struct inet6_ifaddr *ipv6_get_ifaddr(struct net *net, struct in6_addr *addr, read_lock_bh(&addrconf_hash_lock); for(ifp = inet6_addr_lst[hash]; ifp; ifp=ifp->lst_next) { - if (ifp->idev->dev->nd_net != net) + if (dev_net(ifp->idev->dev) != net) continue; if (ipv6_addr_equal(&ifp->addr, addr)) { if (dev == NULL || ifp->idev->dev == dev || @@ -1559,7 +1559,7 @@ addrconf_prefix_route(struct in6_addr *pfx, int plen, struct net_device *dev, .fc_expires = expires, .fc_dst_len = plen, .fc_flags = RTF_UP | flags, - .fc_nlinfo.nl_net = dev->nd_net, + .fc_nlinfo.nl_net = dev_net(dev), }; ipv6_addr_copy(&cfg.fc_dst, pfx); @@ -1586,7 +1586,7 @@ static void addrconf_add_mroute(struct net_device *dev) .fc_ifindex = dev->ifindex, .fc_dst_len = 8, .fc_flags = RTF_UP, - .fc_nlinfo.nl_net = dev->nd_net, + .fc_nlinfo.nl_net = dev_net(dev), }; ipv6_addr_set(&cfg.fc_dst, htonl(0xFF000000), 0, 0, 0); @@ -1603,7 +1603,7 @@ static void sit_route_add(struct net_device *dev) .fc_ifindex = dev->ifindex, .fc_dst_len = 96, .fc_flags = RTF_UP | RTF_NONEXTHOP, - .fc_nlinfo.nl_net = dev->nd_net, + .fc_nlinfo.nl_net = dev_net(dev), }; /* prefix length - 96 bits "::d.d.d.d" */ @@ -1704,7 +1704,7 @@ void addrconf_prefix_rcv(struct net_device *dev, u8 *opt, int len) if (pinfo->onlink) { struct rt6_info *rt; - rt = rt6_lookup(dev->nd_net, &pinfo->prefix, NULL, + rt = rt6_lookup(dev_net(dev), &pinfo->prefix, NULL, dev->ifindex, 1); if (rt && ((rt->rt6i_flags & (RTF_GATEWAY | RTF_DEFAULT)) == 0)) { @@ -1748,7 +1748,7 @@ void addrconf_prefix_rcv(struct net_device *dev, u8 *opt, int len) ok: - ifp = ipv6_get_ifaddr(dev->nd_net, &addr, dev, 1); + ifp = ipv6_get_ifaddr(dev_net(dev), &addr, dev, 1); if (ifp == NULL && valid_lft) { int max_addresses = in6_dev->cnf.max_addresses; @@ -2071,7 +2071,7 @@ static void sit_add_v4_addrs(struct inet6_dev *idev) struct inet6_ifaddr * ifp; struct in6_addr addr; struct net_device *dev; - struct net *net = idev->dev->nd_net; + struct net *net = dev_net(idev->dev); int scope; ASSERT_RTNL(); @@ -2261,7 +2261,7 @@ ipv6_inherit_linklocal(struct inet6_dev *idev, struct net_device *link_dev) static void ip6_tnl_add_linklocal(struct inet6_dev *idev) { struct net_device *link_dev; - struct net *net = idev->dev->nd_net; + struct net *net = dev_net(idev->dev); /* first try to inherit the link-local address from the link device */ if (idev->dev->iflink && @@ -2442,7 +2442,7 @@ static int addrconf_ifdown(struct net_device *dev, int how) { struct inet6_dev *idev; struct inet6_ifaddr *ifa, **bifa; - struct net *net = dev->nd_net; + struct net *net = dev_net(dev); int i; ASSERT_RTNL(); @@ -2771,7 +2771,7 @@ static struct inet6_ifaddr *if6_get_first(struct seq_file *seq) for (state->bucket = 0; state->bucket < IN6_ADDR_HSIZE; ++state->bucket) { ifa = inet6_addr_lst[state->bucket]; - while (ifa && ifa->idev->dev->nd_net != net) + while (ifa && dev_net(ifa->idev->dev) != net) ifa = ifa->lst_next; if (ifa) break; @@ -2787,7 +2787,7 @@ static struct inet6_ifaddr *if6_get_next(struct seq_file *seq, struct inet6_ifad ifa = ifa->lst_next; try_again: if (ifa) { - if 
(ifa->idev->dev->nd_net != net) { + if (dev_net(ifa->idev->dev) != net) { ifa = ifa->lst_next; goto try_again; } @@ -2905,7 +2905,7 @@ int ipv6_chk_home_addr(struct net *net, struct in6_addr *addr) u8 hash = ipv6_addr_hash(addr); read_lock_bh(&addrconf_hash_lock); for (ifp = inet6_addr_lst[hash]; ifp; ifp = ifp->lst_next) { - if (ifp->idev->dev->nd_net != net) + if (dev_net(ifp->idev->dev) != net) continue; if (ipv6_addr_cmp(&ifp->addr, addr) == 0 && (ifp->flags & IFA_F_HOMEADDRESS)) { @@ -3469,7 +3469,7 @@ errout: static void inet6_ifa_notify(int event, struct inet6_ifaddr *ifa) { struct sk_buff *skb; - struct net *net = ifa->idev->dev->nd_net; + struct net *net = dev_net(ifa->idev->dev); int err = -ENOBUFS; skb = nlmsg_new(inet6_ifaddr_msgsize(), GFP_ATOMIC); @@ -3675,7 +3675,7 @@ cont: void inet6_ifinfo_notify(int event, struct inet6_dev *idev) { struct sk_buff *skb; - struct net *net = idev->dev->nd_net; + struct net *net = dev_net(idev->dev); int err = -ENOBUFS; skb = nlmsg_new(inet6_if_nlmsg_size(), GFP_ATOMIC); @@ -3745,7 +3745,7 @@ static void inet6_prefix_notify(int event, struct inet6_dev *idev, struct prefix_info *pinfo) { struct sk_buff *skb; - struct net *net = idev->dev->nd_net; + struct net *net = dev_net(idev->dev); int err = -ENOBUFS; skb = nlmsg_new(inet6_prefix_nlmsg_size(), GFP_ATOMIC); @@ -4157,7 +4157,7 @@ static void addrconf_sysctl_register(struct inet6_dev *idev) NET_IPV6_NEIGH, "ipv6", &ndisc_ifinfo_sysctl_change, NULL); - __addrconf_sysctl_register(idev->dev->nd_net, idev->dev->name, + __addrconf_sysctl_register(dev_net(idev->dev), idev->dev->name, idev->dev->ifindex, idev, &idev->cnf); } diff --git a/net/ipv6/icmp.c b/net/ipv6/icmp.c index 86332417b402..50857662e6b7 100644 --- a/net/ipv6/icmp.c +++ b/net/ipv6/icmp.c @@ -306,7 +306,7 @@ static inline void mip6_addr_swap(struct sk_buff *skb) {} void icmpv6_send(struct sk_buff *skb, int type, int code, __u32 info, struct net_device *dev) { - struct net *net = skb->dev->nd_net; + struct net *net = dev_net(skb->dev); struct inet6_dev *idev = NULL; struct ipv6hdr *hdr = ipv6_hdr(skb); struct sock *sk; @@ -507,7 +507,7 @@ EXPORT_SYMBOL(icmpv6_send); static void icmpv6_echo_reply(struct sk_buff *skb) { - struct net *net = skb->dev->nd_net; + struct net *net = dev_net(skb->dev); struct sock *sk; struct inet6_dev *idev; struct ipv6_pinfo *np; diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c index d34aa61353bb..556300f0eba5 100644 --- a/net/ipv6/ip6_output.c +++ b/net/ipv6/ip6_output.c @@ -402,7 +402,7 @@ int ip6_forward(struct sk_buff *skb) struct dst_entry *dst = skb->dst; struct ipv6hdr *hdr = ipv6_hdr(skb); struct inet6_skb_parm *opt = IP6CB(skb); - struct net *net = dst->dev->nd_net; + struct net *net = dev_net(dst->dev); if (ipv6_devconf.forwarding == 0) goto error; diff --git a/net/ipv6/mcast.c b/net/ipv6/mcast.c index 957ac7e9e929..0357de8e78c8 100644 --- a/net/ipv6/mcast.c +++ b/net/ipv6/mcast.c @@ -1400,7 +1400,7 @@ mld_scount(struct ifmcaddr6 *pmc, int type, int gdeleted, int sdeleted) static struct sk_buff *mld_newpack(struct net_device *dev, int size) { - struct net *net = dev->nd_net; + struct net *net = dev_net(dev); struct sock *sk = net->ipv6.igmp_sk; struct sk_buff *skb; struct mld2_report *pmr; @@ -1448,7 +1448,7 @@ static void mld_sendpack(struct sk_buff *skb) (struct mld2_report *)skb_transport_header(skb); int payload_len, mldlen; struct inet6_dev *idev = in6_dev_get(skb->dev); - struct net *net = skb->dev->nd_net; + struct net *net = dev_net(skb->dev); int err; struct flowi fl; @@ -1762,7 
+1762,7 @@ static void mld_send_cr(struct inet6_dev *idev) static void igmp6_send(struct in6_addr *addr, struct net_device *dev, int type) { - struct net *net = dev->nd_net; + struct net *net = dev_net(dev); struct sock *sk = net->ipv6.igmp_sk; struct inet6_dev *idev; struct sk_buff *skb; diff --git a/net/ipv6/ndisc.c b/net/ipv6/ndisc.c index 3f68a6eae7b2..79af57f586e8 100644 --- a/net/ipv6/ndisc.c +++ b/net/ipv6/ndisc.c @@ -447,7 +447,7 @@ static void __ndisc_send(struct net_device *dev, { struct flowi fl; struct dst_entry *dst; - struct net *net = dev->nd_net; + struct net *net = dev_net(dev); struct sock *sk = net->ipv6.ndisc_sk; struct sk_buff *skb; struct icmp6hdr *hdr; @@ -539,7 +539,7 @@ static void ndisc_send_na(struct net_device *dev, struct neighbour *neigh, }; /* for anycast or proxy, solicited_addr != src_addr */ - ifp = ipv6_get_ifaddr(dev->nd_net, solicited_addr, dev, 1); + ifp = ipv6_get_ifaddr(dev_net(dev), solicited_addr, dev, 1); if (ifp) { src_addr = solicited_addr; if (ifp->flags & IFA_F_OPTIMISTIC) @@ -547,7 +547,7 @@ static void ndisc_send_na(struct net_device *dev, struct neighbour *neigh, in6_ifa_put(ifp); } else { if (ipv6_dev_get_saddr(dev, daddr, - inet6_sk(dev->nd_net->ipv6.ndisc_sk)->srcprefs, + inet6_sk(dev_net(dev)->ipv6.ndisc_sk)->srcprefs, &tmpaddr)) return; src_addr = &tmpaddr; @@ -601,7 +601,7 @@ void ndisc_send_rs(struct net_device *dev, struct in6_addr *saddr, * suppress the inclusion of the sllao. */ if (send_sllao) { - struct inet6_ifaddr *ifp = ipv6_get_ifaddr(dev->nd_net, saddr, + struct inet6_ifaddr *ifp = ipv6_get_ifaddr(dev_net(dev), saddr, dev, 1); if (ifp) { if (ifp->flags & IFA_F_OPTIMISTIC) { @@ -639,7 +639,7 @@ static void ndisc_solicit(struct neighbour *neigh, struct sk_buff *skb) struct in6_addr *target = (struct in6_addr *)&neigh->primary_key; int probes = atomic_read(&neigh->probes); - if (skb && ipv6_chk_addr(dev->nd_net, &ipv6_hdr(skb)->saddr, dev, 1)) + if (skb && ipv6_chk_addr(dev_net(dev), &ipv6_hdr(skb)->saddr, dev, 1)) saddr = &ipv6_hdr(skb)->saddr; if ((probes -= neigh->parms->ucast_probes) < 0) { @@ -727,7 +727,7 @@ static void ndisc_recv_ns(struct sk_buff *skb) inc = ipv6_addr_is_multicast(daddr); - ifp = ipv6_get_ifaddr(dev->nd_net, &msg->target, dev, 1); + ifp = ipv6_get_ifaddr(dev_net(dev), &msg->target, dev, 1); if (ifp) { if (ifp->flags & (IFA_F_TENTATIVE|IFA_F_OPTIMISTIC)) { @@ -776,7 +776,7 @@ static void ndisc_recv_ns(struct sk_buff *skb) if (ipv6_chk_acast_addr(dev, &msg->target) || (idev->cnf.forwarding && (ipv6_devconf.proxy_ndp || idev->cnf.proxy_ndp) && - (pneigh = pneigh_lookup(&nd_tbl, dev->nd_net, + (pneigh = pneigh_lookup(&nd_tbl, dev_net(dev), &msg->target, dev, 0)) != NULL)) { if (!(NEIGH_CB(skb)->flags & LOCALLY_ENQUEUED) && skb->pkt_type != PACKET_HOST && @@ -886,7 +886,7 @@ static void ndisc_recv_na(struct sk_buff *skb) return; } } - ifp = ipv6_get_ifaddr(dev->nd_net, &msg->target, dev, 1); + ifp = ipv6_get_ifaddr(dev_net(dev), &msg->target, dev, 1); if (ifp) { if (ifp->flags & IFA_F_TENTATIVE) { addrconf_dad_failure(ifp); @@ -918,7 +918,7 @@ static void ndisc_recv_na(struct sk_buff *skb) */ if (lladdr && !memcmp(lladdr, dev->dev_addr, dev->addr_len) && ipv6_devconf.forwarding && ipv6_devconf.proxy_ndp && - pneigh_lookup(&nd_tbl, dev->nd_net, &msg->target, dev, 0)) { + pneigh_lookup(&nd_tbl, dev_net(dev), &msg->target, dev, 0)) { /* XXX: idev->cnf.prixy_ndp */ goto out; } @@ -1008,7 +1008,7 @@ static void ndisc_ra_useropt(struct sk_buff *ra, struct nd_opt_hdr *opt) struct sk_buff *skb; struct nlmsghdr 
*nlh; struct nduseroptmsg *ndmsg; - struct net *net = ra->dev->nd_net; + struct net *net = dev_net(ra->dev); int err; int base_size = NLMSG_ALIGN(sizeof(struct nduseroptmsg) + (opt->nd_opt_len << 3)); @@ -1395,7 +1395,7 @@ void ndisc_send_redirect(struct sk_buff *skb, struct neighbour *neigh, struct in6_addr *target) { struct net_device *dev = skb->dev; - struct net *net = dev->nd_net; + struct net *net = dev_net(dev); struct sock *sk = net->ipv6.ndisc_sk; int len = sizeof(struct icmp6hdr) + 2 * sizeof(struct in6_addr); struct sk_buff *buff; @@ -1597,7 +1597,7 @@ int ndisc_rcv(struct sk_buff *skb) static int ndisc_netdev_event(struct notifier_block *this, unsigned long event, void *ptr) { struct net_device *dev = ptr; - struct net *net = dev->nd_net; + struct net *net = dev_net(dev); switch (event) { case NETDEV_CHANGEADDR: diff --git a/net/ipv6/netfilter/ip6_queue.c b/net/ipv6/netfilter/ip6_queue.c index cc2f9afcf808..a6d30626b47c 100644 --- a/net/ipv6/netfilter/ip6_queue.c +++ b/net/ipv6/netfilter/ip6_queue.c @@ -484,7 +484,7 @@ ipq_rcv_dev_event(struct notifier_block *this, { struct net_device *dev = ptr; - if (dev->nd_net != &init_net) + if (dev_net(dev) != &init_net) return NOTIFY_DONE; /* Drop any packets associated with the downed device */ diff --git a/net/ipv6/proc.c b/net/ipv6/proc.c index 8a5be290c710..364dc332532c 100644 --- a/net/ipv6/proc.c +++ b/net/ipv6/proc.c @@ -214,7 +214,7 @@ int snmp6_register_dev(struct inet6_dev *idev) if (!idev || !idev->dev) return -EINVAL; - if (idev->dev->nd_net != &init_net) + if (dev_net(idev->dev) != &init_net) return 0; if (!proc_net_devsnmp6) diff --git a/net/ipv6/raw.c b/net/ipv6/raw.c index 548d0763f4d3..efb0047f6880 100644 --- a/net/ipv6/raw.c +++ b/net/ipv6/raw.c @@ -176,7 +176,7 @@ static int ipv6_raw_deliver(struct sk_buff *skb, int nexthdr) if (sk == NULL) goto out; - net = skb->dev->nd_net; + net = dev_net(skb->dev); sk = __raw_v6_lookup(net, sk, nexthdr, daddr, saddr, IP6CB(skb)->iif); while (sk) { @@ -363,7 +363,7 @@ void raw6_icmp_error(struct sk_buff *skb, int nexthdr, if (sk != NULL) { saddr = &ipv6_hdr(skb)->saddr; daddr = &ipv6_hdr(skb)->daddr; - net = skb->dev->nd_net; + net = dev_net(skb->dev); while ((sk = __raw_v6_lookup(net, sk, nexthdr, saddr, daddr, IP6CB(skb)->iif))) { diff --git a/net/ipv6/reassembly.c b/net/ipv6/reassembly.c index f936d045a39d..4e1447634f36 100644 --- a/net/ipv6/reassembly.c +++ b/net/ipv6/reassembly.c @@ -600,7 +600,7 @@ static int ipv6_frag_rcv(struct sk_buff *skb) return 1; } - net = skb->dev->nd_net; + net = dev_net(skb->dev); if (atomic_read(&net->ipv6.frags.mem) > net->ipv6.frags.high_thresh) ip6_evictor(net, ip6_dst_idev(skb->dst)); diff --git a/net/ipv6/route.c b/net/ipv6/route.c index 06faa46920e1..65053fba8c1a 100644 --- a/net/ipv6/route.c +++ b/net/ipv6/route.c @@ -208,7 +208,7 @@ static void ip6_dst_ifdown(struct dst_entry *dst, struct net_device *dev, struct rt6_info *rt = (struct rt6_info *)dst; struct inet6_dev *idev = rt->rt6i_idev; struct net_device *loopback_dev = - dev->nd_net->loopback_dev; + dev_net(dev)->loopback_dev; if (dev != loopback_dev && idev != NULL && idev->dev == dev) { struct inet6_dev *loopback_idev = @@ -433,7 +433,7 @@ static struct rt6_info *rt6_select(struct fib6_node *fn, int oif, int strict) RT6_TRACE("%s() => %p\n", __func__, match); - net = rt0->rt6i_dev->nd_net; + net = dev_net(rt0->rt6i_dev); return (match ? 
match : net->ipv6.ip6_null_entry); } @@ -441,7 +441,7 @@ static struct rt6_info *rt6_select(struct fib6_node *fn, int oif, int strict) int rt6_route_rcv(struct net_device *dev, u8 *opt, int len, struct in6_addr *gwaddr) { - struct net *net = dev->nd_net; + struct net *net = dev_net(dev); struct route_info *rinfo = (struct route_info *) opt; struct in6_addr prefix_buf, *prefix; unsigned int pref; @@ -607,7 +607,7 @@ static int __ip6_ins_rt(struct rt6_info *rt, struct nl_info *info) int ip6_ins_rt(struct rt6_info *rt) { struct nl_info info = { - .nl_net = rt->rt6i_dev->nd_net, + .nl_net = dev_net(rt->rt6i_dev), }; return __ip6_ins_rt(rt, &info); } @@ -745,7 +745,7 @@ static struct rt6_info *ip6_pol_route_input(struct net *net, struct fib6_table * void ip6_route_input(struct sk_buff *skb) { struct ipv6hdr *iph = ipv6_hdr(skb); - struct net *net = skb->dev->nd_net; + struct net *net = dev_net(skb->dev); int flags = RT6_LOOKUP_F_HAS_SADDR; struct flowi fl = { .iif = skb->dev->ifindex, @@ -928,7 +928,7 @@ struct dst_entry *icmp6_dst_alloc(struct net_device *dev, { struct rt6_info *rt; struct inet6_dev *idev = in6_dev_get(dev); - struct net *net = dev->nd_net; + struct net *net = dev_net(dev); if (unlikely(idev == NULL)) return NULL; @@ -1252,7 +1252,7 @@ install_route: rt->rt6i_idev = idev; rt->rt6i_table = table; - cfg->fc_nlinfo.nl_net = dev->nd_net; + cfg->fc_nlinfo.nl_net = dev_net(dev); return __ip6_ins_rt(rt, &cfg->fc_nlinfo); @@ -1270,7 +1270,7 @@ static int __ip6_del_rt(struct rt6_info *rt, struct nl_info *info) { int err; struct fib6_table *table; - struct net *net = rt->rt6i_dev->nd_net; + struct net *net = dev_net(rt->rt6i_dev); if (rt == net->ipv6.ip6_null_entry) return -ENOENT; @@ -1289,7 +1289,7 @@ static int __ip6_del_rt(struct rt6_info *rt, struct nl_info *info) int ip6_del_rt(struct rt6_info *rt) { struct nl_info info = { - .nl_net = rt->rt6i_dev->nd_net, + .nl_net = dev_net(rt->rt6i_dev), }; return __ip6_del_rt(rt, &info); } @@ -1401,7 +1401,7 @@ static struct rt6_info *ip6_route_redirect(struct in6_addr *dest, struct net_device *dev) { int flags = RT6_LOOKUP_F_HAS_SADDR; - struct net *net = dev->nd_net; + struct net *net = dev_net(dev); struct ip6rd_flowi rdfl = { .fl = { .oif = dev->ifindex, @@ -1428,7 +1428,7 @@ void rt6_redirect(struct in6_addr *dest, struct in6_addr *src, { struct rt6_info *rt, *nrt = NULL; struct netevent_redirect netevent; - struct net *net = neigh->dev->nd_net; + struct net *net = dev_net(neigh->dev); rt = ip6_route_redirect(dest, src, saddr, neigh->dev); @@ -1477,7 +1477,7 @@ void rt6_redirect(struct in6_addr *dest, struct in6_addr *src, nrt->rt6i_nexthop = neigh_clone(neigh); /* Reset pmtu, it may be better */ nrt->u.dst.metrics[RTAX_MTU-1] = ipv6_get_mtu(neigh->dev); - nrt->u.dst.metrics[RTAX_ADVMSS-1] = ipv6_advmss(neigh->dev->nd_net, + nrt->u.dst.metrics[RTAX_ADVMSS-1] = ipv6_advmss(dev_net(neigh->dev), dst_mtu(&nrt->u.dst)); if (ip6_ins_rt(nrt)) @@ -1506,7 +1506,7 @@ void rt6_pmtu_discovery(struct in6_addr *daddr, struct in6_addr *saddr, struct net_device *dev, u32 pmtu) { struct rt6_info *rt, *nrt; - struct net *net = dev->nd_net; + struct net *net = dev_net(dev); int allfrag = 0; rt = rt6_lookup(net, daddr, saddr, dev->ifindex, 0); @@ -1583,7 +1583,7 @@ out: static struct rt6_info * ip6_rt_copy(struct rt6_info *ort) { - struct net *net = ort->rt6i_dev->nd_net; + struct net *net = dev_net(ort->rt6i_dev); struct rt6_info *rt = ip6_dst_alloc(net->ipv6.ip6_dst_ops); if (rt) { @@ -1682,7 +1682,7 @@ struct rt6_info *rt6_get_dflt_router(struct in6_addr 
*addr, struct net_device *d struct rt6_info *rt; struct fib6_table *table; - table = fib6_get_table(dev->nd_net, RT6_TABLE_DFLT); + table = fib6_get_table(dev_net(dev), RT6_TABLE_DFLT); if (table == NULL) return NULL; @@ -1713,7 +1713,7 @@ struct rt6_info *rt6_add_dflt_router(struct in6_addr *gwaddr, RTF_UP | RTF_EXPIRES | RTF_PREF(pref), .fc_nlinfo.pid = 0, .fc_nlinfo.nlh = NULL, - .fc_nlinfo.nl_net = dev->nd_net, + .fc_nlinfo.nl_net = dev_net(dev), }; ipv6_addr_copy(&cfg.fc_gateway, gwaddr); @@ -1862,7 +1862,7 @@ struct rt6_info *addrconf_dst_alloc(struct inet6_dev *idev, const struct in6_addr *addr, int anycast) { - struct net *net = idev->dev->nd_net; + struct net *net = dev_net(idev->dev); struct rt6_info *rt = ip6_dst_alloc(net->ipv6.ip6_dst_ops); if (rt == NULL) @@ -1939,7 +1939,7 @@ static int rt6_mtu_change_route(struct rt6_info *rt, void *p_arg) { struct rt6_mtu_change_arg *arg = (struct rt6_mtu_change_arg *) p_arg; struct inet6_dev *idev; - struct net *net = arg->dev->nd_net; + struct net *net = dev_net(arg->dev); /* In IPv6 pmtu discovery is not optional, so that RTAX_MTU lock cannot disable it. @@ -1983,7 +1983,7 @@ void rt6_mtu_change(struct net_device *dev, unsigned mtu) .mtu = mtu, }; - fib6_clean_all(dev->nd_net, rt6_mtu_change_route, 0, &arg); + fib6_clean_all(dev_net(dev), rt6_mtu_change_route, 0, &arg); } static const struct nla_policy rtm_ipv6_policy[RTA_MAX+1] = { @@ -2321,7 +2321,7 @@ static int ip6_route_dev_notify(struct notifier_block *this, unsigned long event, void *data) { struct net_device *dev = (struct net_device *)data; - struct net *net = dev->nd_net; + struct net *net = dev_net(dev); if (event == NETDEV_REGISTER && (dev->flags & IFF_LOOPBACK)) { net->ipv6.ip6_null_entry->u.dst.dev = dev; diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c index 8dd72966ff78..086deffff9c9 100644 --- a/net/ipv6/tcp_ipv6.c +++ b/net/ipv6/tcp_ipv6.c @@ -321,7 +321,7 @@ static void tcp_v6_err(struct sk_buff *skb, struct inet6_skb_parm *opt, struct tcp_sock *tp; __u32 seq; - sk = inet6_lookup(skb->dev->nd_net, &tcp_hashinfo, &hdr->daddr, + sk = inet6_lookup(dev_net(skb->dev), &tcp_hashinfo, &hdr->daddr, th->dest, &hdr->saddr, th->source, skb->dev->ifindex); if (sk == NULL) { @@ -988,7 +988,7 @@ static void tcp_v6_send_reset(struct sock *sk, struct sk_buff *skb) struct tcphdr *th = tcp_hdr(skb), *t1; struct sk_buff *buff; struct flowi fl; - struct net *net = skb->dst->dev->nd_net; + struct net *net = dev_net(skb->dst->dev); struct sock *ctl_sk = net->ipv6.tcp_sk; unsigned int tot_len = sizeof(*th); #ifdef CONFIG_TCP_MD5SIG @@ -1093,7 +1093,7 @@ static void tcp_v6_send_ack(struct tcp_timewait_sock *tw, struct tcphdr *th = tcp_hdr(skb), *t1; struct sk_buff *buff; struct flowi fl; - struct net *net = skb->dev->nd_net; + struct net *net = dev_net(skb->dev); struct sock *ctl_sk = net->ipv6.tcp_sk; unsigned int tot_len = sizeof(struct tcphdr); __be32 *topt; @@ -1739,7 +1739,7 @@ static int tcp_v6_rcv(struct sk_buff *skb) TCP_SKB_CB(skb)->flags = ipv6_get_dsfield(ipv6_hdr(skb)); TCP_SKB_CB(skb)->sacked = 0; - sk = __inet6_lookup(skb->dev->nd_net, &tcp_hashinfo, + sk = __inet6_lookup(dev_net(skb->dev), &tcp_hashinfo, &ipv6_hdr(skb)->saddr, th->source, &ipv6_hdr(skb)->daddr, ntohs(th->dest), inet6_iif(skb)); @@ -1822,7 +1822,7 @@ do_time_wait: { struct sock *sk2; - sk2 = inet6_lookup_listener(skb->dev->nd_net, &tcp_hashinfo, + sk2 = inet6_lookup_listener(dev_net(skb->dev), &tcp_hashinfo, &ipv6_hdr(skb)->daddr, ntohs(th->dest), inet6_iif(skb)); if (sk2 != NULL) { diff --git 
a/net/ipv6/udp.c b/net/ipv6/udp.c index 593d3efadaf9..6683c04b427e 100644 --- a/net/ipv6/udp.c +++ b/net/ipv6/udp.c @@ -235,7 +235,7 @@ void __udp6_lib_err(struct sk_buff *skb, struct inet6_skb_parm *opt, struct sock *sk; int err; - sk = __udp6_lib_lookup(skb->dev->nd_net, daddr, uh->dest, + sk = __udp6_lib_lookup(dev_net(skb->dev), daddr, uh->dest, saddr, uh->source, inet6_iif(skb), udptable); if (sk == NULL) return; @@ -483,7 +483,7 @@ int __udp6_lib_rcv(struct sk_buff *skb, struct hlist_head udptable[], * check socket cache ... must talk to Alan about his plans * for sock caches... i'll skip this for now. */ - sk = __udp6_lib_lookup(skb->dev->nd_net, saddr, uh->source, + sk = __udp6_lib_lookup(dev_net(skb->dev), saddr, uh->source, daddr, uh->dest, inet6_iif(skb), udptable); if (sk == NULL) { diff --git a/net/ipv6/xfrm6_policy.c b/net/ipv6/xfrm6_policy.c index d92d1fceb8cf..8f1e0543b3c4 100644 --- a/net/ipv6/xfrm6_policy.c +++ b/net/ipv6/xfrm6_policy.c @@ -247,7 +247,7 @@ static void xfrm6_dst_ifdown(struct dst_entry *dst, struct net_device *dev, xdst = (struct xfrm_dst *)dst; if (xdst->u.rt6.rt6i_idev->dev == dev) { struct inet6_dev *loopback_idev = - in6_dev_get(dev->nd_net->loopback_dev); + in6_dev_get(dev_net(dev)->loopback_dev); BUG_ON(!loopback_idev); do { diff --git a/net/ipx/af_ipx.c b/net/ipx/af_ipx.c index c76a9523091b..81ae8735f5e3 100644 --- a/net/ipx/af_ipx.c +++ b/net/ipx/af_ipx.c @@ -335,7 +335,7 @@ static int ipxitf_device_event(struct notifier_block *notifier, struct net_device *dev = ptr; struct ipx_interface *i, *tmp; - if (dev->nd_net != &init_net) + if (dev_net(dev) != &init_net) return NOTIFY_DONE; if (event != NETDEV_DOWN && event != NETDEV_UP) @@ -1636,7 +1636,7 @@ static int ipx_rcv(struct sk_buff *skb, struct net_device *dev, struct packet_ty u16 ipx_pktsize; int rc = 0; - if (dev->nd_net != &init_net) + if (dev_net(dev) != &init_net) goto drop; /* Not ours */ diff --git a/net/irda/irlap_frame.c b/net/irda/irlap_frame.c index a38b231c8689..90894534f3cc 100644 --- a/net/irda/irlap_frame.c +++ b/net/irda/irlap_frame.c @@ -1326,7 +1326,7 @@ int irlap_driver_rcv(struct sk_buff *skb, struct net_device *dev, int command; __u8 control; - if (dev->nd_net != &init_net) + if (dev_net(dev) != &init_net) goto out; /* FIXME: should we get our own field? */ diff --git a/net/llc/llc_input.c b/net/llc/llc_input.c index b9143d2a04e1..a69c5c427fe3 100644 --- a/net/llc/llc_input.c +++ b/net/llc/llc_input.c @@ -146,7 +146,7 @@ int llc_rcv(struct sk_buff *skb, struct net_device *dev, int (*rcv)(struct sk_buff *, struct net_device *, struct packet_type *, struct net_device *); - if (dev->nd_net != &init_net) + if (dev_net(dev) != &init_net) goto drop; /* diff --git a/net/netfilter/core.c b/net/netfilter/core.c index ec05684c56d7..292fa28146fb 100644 --- a/net/netfilter/core.c +++ b/net/netfilter/core.c @@ -168,7 +168,7 @@ int nf_hook_slow(int pf, unsigned int hook, struct sk_buff *skb, #ifdef CONFIG_NET_NS struct net *net; - net = indev == NULL ? outdev->nd_net : indev->nd_net; + net = indev == NULL ? 
dev_net(outdev) : dev_net(indev); if (net != &init_net) return 1; #endif diff --git a/net/netfilter/nfnetlink_queue.c b/net/netfilter/nfnetlink_queue.c index 012cb6910820..81fb048add88 100644 --- a/net/netfilter/nfnetlink_queue.c +++ b/net/netfilter/nfnetlink_queue.c @@ -557,7 +557,7 @@ nfqnl_rcv_dev_event(struct notifier_block *this, { struct net_device *dev = ptr; - if (dev->nd_net != &init_net) + if (dev_net(dev) != &init_net) return NOTIFY_DONE; /* Drop any packets associated with the downed device */ diff --git a/net/netlabel/netlabel_unlabeled.c b/net/netlabel/netlabel_unlabeled.c index 4478f2f6079d..a547c6320eb3 100644 --- a/net/netlabel/netlabel_unlabeled.c +++ b/net/netlabel/netlabel_unlabeled.c @@ -954,7 +954,7 @@ static int netlbl_unlhsh_netdev_handler(struct notifier_block *this, struct net_device *dev = ptr; struct netlbl_unlhsh_iface *iface = NULL; - if (dev->nd_net != &init_net) + if (dev_net(dev) != &init_net) return NOTIFY_DONE; /* XXX - should this be a check for NETDEV_DOWN or _UNREGISTER? */ diff --git a/net/netrom/af_netrom.c b/net/netrom/af_netrom.c index 972250c974f1..a270ebf9f765 100644 --- a/net/netrom/af_netrom.c +++ b/net/netrom/af_netrom.c @@ -106,7 +106,7 @@ static int nr_device_event(struct notifier_block *this, unsigned long event, voi { struct net_device *dev = (struct net_device *)ptr; - if (dev->nd_net != &init_net) + if (dev_net(dev) != &init_net) return NOTIFY_DONE; if (event != NETDEV_DOWN) diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c index a56ed2120e07..baa290d3444a 100644 --- a/net/packet/af_packet.c +++ b/net/packet/af_packet.c @@ -263,7 +263,7 @@ static int packet_rcv_spkt(struct sk_buff *skb, struct net_device *dev, struct if (skb->pkt_type == PACKET_LOOPBACK) goto out; - if (dev->nd_net != sk->sk_net) + if (dev_net(dev) != sk->sk_net) goto out; if ((skb = skb_share_check(skb, GFP_ATOMIC)) == NULL) @@ -451,7 +451,7 @@ static int packet_rcv(struct sk_buff *skb, struct net_device *dev, struct packet sk = pt->af_packet_priv; po = pkt_sk(sk); - if (dev->nd_net != sk->sk_net) + if (dev_net(dev) != sk->sk_net) goto drop; skb->dev = dev; @@ -568,7 +568,7 @@ static int tpacket_rcv(struct sk_buff *skb, struct net_device *dev, struct packe sk = pt->af_packet_priv; po = pkt_sk(sk); - if (dev->nd_net != sk->sk_net) + if (dev_net(dev) != sk->sk_net) goto drop; if (dev->header_ops) { @@ -1450,7 +1450,7 @@ static int packet_notifier(struct notifier_block *this, unsigned long msg, void struct sock *sk; struct hlist_node *node; struct net_device *dev = data; - struct net *net = dev->nd_net; + struct net *net = dev_net(dev); read_lock(&net->packet.sklist_lock); sk_for_each(sk, node, &net->packet.sklist) { diff --git a/net/rose/af_rose.c b/net/rose/af_rose.c index 4a31a81059ab..1a7f143cf741 100644 --- a/net/rose/af_rose.c +++ b/net/rose/af_rose.c @@ -197,7 +197,7 @@ static int rose_device_event(struct notifier_block *this, unsigned long event, { struct net_device *dev = (struct net_device *)ptr; - if (dev->nd_net != &init_net) + if (dev_net(dev) != &init_net) return NOTIFY_DONE; if (event != NETDEV_DOWN) diff --git a/net/sctp/protocol.c b/net/sctp/protocol.c index beea2fb18b15..2faa0d8839eb 100644 --- a/net/sctp/protocol.c +++ b/net/sctp/protocol.c @@ -630,7 +630,7 @@ static int sctp_inetaddr_event(struct notifier_block *this, unsigned long ev, struct sctp_sockaddr_entry *temp; int found = 0; - if (ifa->ifa_dev->dev->nd_net != &init_net) + if (dev_net(ifa->ifa_dev->dev) != &init_net) return NOTIFY_DONE; switch (ev) { diff --git a/net/tipc/eth_media.c 
b/net/tipc/eth_media.c index 3bbef2ab22ae..9cd35eec3e7f 100644 --- a/net/tipc/eth_media.c +++ b/net/tipc/eth_media.c @@ -101,7 +101,7 @@ static int recv_msg(struct sk_buff *buf, struct net_device *dev, struct eth_bearer *eb_ptr = (struct eth_bearer *)pt->af_packet_priv; u32 size; - if (dev->nd_net != &init_net) { + if (dev_net(dev) != &init_net) { kfree_skb(buf); return 0; } @@ -198,7 +198,7 @@ static int recv_notification(struct notifier_block *nb, unsigned long evt, struct eth_bearer *eb_ptr = ð_bearers[0]; struct eth_bearer *stop = ð_bearers[MAX_ETH_BEARERS]; - if (dev->nd_net != &init_net) + if (dev_net(dev) != &init_net) return NOTIFY_DONE; while ((eb_ptr->dev != dev)) { diff --git a/net/wireless/wext.c b/net/wireless/wext.c index 2c569b63e7d8..947188a5b937 100644 --- a/net/wireless/wext.c +++ b/net/wireless/wext.c @@ -1157,7 +1157,7 @@ static void rtmsg_iwinfo(struct net_device *dev, char *event, int event_len) struct sk_buff *skb; int err; - if (dev->nd_net != &init_net) + if (dev_net(dev) != &init_net) return; skb = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_ATOMIC); diff --git a/net/x25/af_x25.c b/net/x25/af_x25.c index 339ca4a8e89e..7a46ea73fe2d 100644 --- a/net/x25/af_x25.c +++ b/net/x25/af_x25.c @@ -191,7 +191,7 @@ static int x25_device_event(struct notifier_block *this, unsigned long event, struct net_device *dev = ptr; struct x25_neigh *nb; - if (dev->nd_net != &init_net) + if (dev_net(dev) != &init_net) return NOTIFY_DONE; if (dev->type == ARPHRD_X25 diff --git a/net/x25/x25_dev.c b/net/x25/x25_dev.c index f0679d283110..3ff206c0ae94 100644 --- a/net/x25/x25_dev.c +++ b/net/x25/x25_dev.c @@ -95,7 +95,7 @@ int x25_lapb_receive_frame(struct sk_buff *skb, struct net_device *dev, struct sk_buff *nskb; struct x25_neigh *nb; - if (dev->nd_net != &init_net) + if (dev_net(dev) != &init_net) goto drop; nskb = skb_copy(skb, GFP_ATOMIC); diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c index 8e588f20c60c..15d73e47cc2c 100644 --- a/net/xfrm/xfrm_policy.c +++ b/net/xfrm/xfrm_policy.c @@ -2079,7 +2079,7 @@ static int stale_bundle(struct dst_entry *dst) void xfrm_dst_ifdown(struct dst_entry *dst, struct net_device *dev) { while ((dst = dst->child) && dst->xfrm && dst->dev == dev) { - dst->dev = dev->nd_net->loopback_dev; + dst->dev = dev_net(dev)->loopback_dev; dev_hold(dst->dev); dev_put(dev); } @@ -2350,7 +2350,7 @@ static int xfrm_dev_event(struct notifier_block *this, unsigned long event, void { struct net_device *dev = ptr; - if (dev->nd_net != &init_net) + if (dev_net(dev) != &init_net) return NOTIFY_DONE; switch (event) { diff --git a/security/selinux/netif.c b/security/selinux/netif.c index 013d3117a86b..9c8a82aa8baf 100644 --- a/security/selinux/netif.c +++ b/security/selinux/netif.c @@ -281,7 +281,7 @@ static int sel_netif_netdev_notifier_handler(struct notifier_block *this, { struct net_device *dev = ptr; - if (dev->nd_net != &init_net) + if (dev_net(dev) != &init_net) return NOTIFY_DONE; if (event == NETDEV_DOWN) -- cgit v1.2.3-59-g8ed1b From 74b2e047ecda7a82c3327a0d0bb45ee2ccf301ca Mon Sep 17 00:00:00 2001 From: Christof Schmitt Date: Mon, 3 Mar 2008 12:19:28 +0100 Subject: [SCSI] zfcp: convert zfcp to use target reset and device reset handler [based on proposal from Mike Christie , this patch adds some simplifications to the handler functions] With the new target reset handler callback in the SCSI midlayer, the device reset handler in zfcp can be split in two parts. 
Now, zfcp does not have to track anymore whether the device supports LUN resets, so remove this flag and let the SCSI midlayer decide what to do. The device reset handler simply issues a LUN reset and the target reset handler a target reset. Signed-off-by: Christof Schmitt Signed-off-by: James Bottomley --- drivers/s390/scsi/zfcp_def.h | 1 - drivers/s390/scsi/zfcp_scsi.c | 64 ++++++++++++++----------------------------- 2 files changed, 20 insertions(+), 45 deletions(-) (limited to 'drivers/s390') diff --git a/drivers/s390/scsi/zfcp_def.h b/drivers/s390/scsi/zfcp_def.h index 9e9f6c1e4e5d..662c70f537ec 100644 --- a/drivers/s390/scsi/zfcp_def.h +++ b/drivers/s390/scsi/zfcp_def.h @@ -634,7 +634,6 @@ do { \ ZFCP_STATUS_PORT_NO_SCSI_ID) /* logical unit status */ -#define ZFCP_STATUS_UNIT_NOTSUPPUNITRESET 0x00000001 #define ZFCP_STATUS_UNIT_TEMPORARY 0x00000002 #define ZFCP_STATUS_UNIT_SHARED 0x00000004 #define ZFCP_STATUS_UNIT_READONLY 0x00000008 diff --git a/drivers/s390/scsi/zfcp_scsi.c b/drivers/s390/scsi/zfcp_scsi.c index b9daf5c05862..ff97a61ad964 100644 --- a/drivers/s390/scsi/zfcp_scsi.c +++ b/drivers/s390/scsi/zfcp_scsi.c @@ -31,6 +31,7 @@ static int zfcp_scsi_queuecommand(struct scsi_cmnd *, void (*done) (struct scsi_cmnd *)); static int zfcp_scsi_eh_abort_handler(struct scsi_cmnd *); static int zfcp_scsi_eh_device_reset_handler(struct scsi_cmnd *); +static int zfcp_scsi_eh_target_reset_handler(struct scsi_cmnd *); static int zfcp_scsi_eh_host_reset_handler(struct scsi_cmnd *); static int zfcp_task_management_function(struct zfcp_unit *, u8, struct scsi_cmnd *); @@ -51,6 +52,7 @@ struct zfcp_data zfcp_data = { .queuecommand = zfcp_scsi_queuecommand, .eh_abort_handler = zfcp_scsi_eh_abort_handler, .eh_device_reset_handler = zfcp_scsi_eh_device_reset_handler, + .eh_target_reset_handler = zfcp_scsi_eh_target_reset_handler, .eh_host_reset_handler = zfcp_scsi_eh_host_reset_handler, .can_queue = 4096, .this_id = -1, @@ -442,58 +444,32 @@ static int zfcp_scsi_eh_abort_handler(struct scsi_cmnd *scpnt) return retval; } -static int -zfcp_scsi_eh_device_reset_handler(struct scsi_cmnd *scpnt) +static int zfcp_scsi_eh_device_reset_handler(struct scsi_cmnd *scpnt) { int retval; - struct zfcp_unit *unit = (struct zfcp_unit *) scpnt->device->hostdata; + struct zfcp_unit *unit = scpnt->device->hostdata; if (!unit) { - ZFCP_LOG_NORMAL("bug: Tried reset for nonexistent unit\n"); - retval = SUCCESS; - goto out; + WARN_ON(1); + return SUCCESS; } - ZFCP_LOG_NORMAL("resetting unit 0x%016Lx on port 0x%016Lx, adapter %s\n", - unit->fcp_lun, unit->port->wwpn, - zfcp_get_busid_by_adapter(unit->port->adapter)); + retval = zfcp_task_management_function(unit, + FCP_LOGICAL_UNIT_RESET, + scpnt); + return retval ? FAILED : SUCCESS; +} - /* - * If we do not know whether the unit supports 'logical unit reset' - * then try 'logical unit reset' and proceed with 'target reset' - * if 'logical unit reset' fails. - * If the unit is known not to support 'logical unit reset' then - * skip 'logical unit reset' and try 'target reset' immediately. 
- */ - if (!atomic_test_mask(ZFCP_STATUS_UNIT_NOTSUPPUNITRESET, - &unit->status)) { - retval = zfcp_task_management_function(unit, - FCP_LOGICAL_UNIT_RESET, - scpnt); - if (retval) { - ZFCP_LOG_DEBUG("unit reset failed (unit=%p)\n", unit); - if (retval == -ENOTSUPP) - atomic_set_mask - (ZFCP_STATUS_UNIT_NOTSUPPUNITRESET, - &unit->status); - /* fall through and try 'target reset' next */ - } else { - ZFCP_LOG_DEBUG("unit reset succeeded (unit=%p)\n", - unit); - /* avoid 'target reset' */ - retval = SUCCESS; - goto out; - } +static int zfcp_scsi_eh_target_reset_handler(struct scsi_cmnd *scpnt) +{ + int retval; + struct zfcp_unit *unit = scpnt->device->hostdata; + + if (!unit) { + WARN_ON(1); + return SUCCESS; } retval = zfcp_task_management_function(unit, FCP_TARGET_RESET, scpnt); - if (retval) { - ZFCP_LOG_DEBUG("target reset failed (unit=%p)\n", unit); - retval = FAILED; - } else { - ZFCP_LOG_DEBUG("target reset succeeded (unit=%p)\n", unit); - retval = SUCCESS; - } - out: - return retval; + return retval ? FAILED : SUCCESS; } static int -- cgit v1.2.3-59-g8ed1b From 5c815d1501a9ce84578cb3ec64c9d31ef91e3de2 Mon Sep 17 00:00:00 2001 From: Christof Schmitt Date: Mon, 10 Mar 2008 16:18:54 +0100 Subject: [SCSI] zfcp: Fix handling for boxed port after physical close When a FSF physical close returns the status boxed, this means that another system already closed the port. For our system this is the same status as in the good path, we have to send the normal close. So, set the status for the boxed response to the same as for the good status. Signed-off-by: Christof Schmitt Signed-off-by: Martin Peschke Signed-off-by: James Bottomley --- drivers/s390/scsi/zfcp_fsf.c | 7 +++++++ 1 file changed, 7 insertions(+) (limited to 'drivers/s390') diff --git a/drivers/s390/scsi/zfcp_fsf.c b/drivers/s390/scsi/zfcp_fsf.c index 0dff05840ee2..2ed3c7b48882 100644 --- a/drivers/s390/scsi/zfcp_fsf.c +++ b/drivers/s390/scsi/zfcp_fsf.c @@ -2968,6 +2968,13 @@ zfcp_fsf_close_physical_port_handler(struct zfcp_fsf_req *fsf_req) zfcp_erp_port_boxed(port); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR | ZFCP_STATUS_FSFREQ_RETRY; + + /* can't use generic zfcp_erp_modify_port_status because + * ZFCP_STATUS_COMMON_OPEN must not be reset for the port */ + atomic_clear_mask(ZFCP_STATUS_PORT_PHYS_OPEN, &port->status); + list_for_each_entry(unit, &port->unit_list_head, list) + atomic_clear_mask(ZFCP_STATUS_COMMON_OPEN, + &unit->status); break; case FSF_ADAPTER_STATUS_AVAILABLE: -- cgit v1.2.3-59-g8ed1b From c15450e33d198334291d50b5a95337c6b90cdab0 Mon Sep 17 00:00:00 2001 From: Martin Peschke Date: Thu, 27 Mar 2008 14:21:55 +0100 Subject: [SCSI] zfcp: Introduce a helper function that dumps hex data to a zfcp trace. 
Signed-off-by: Martin Peschke Signed-off-by: Christof Schmitt Signed-off-by: James Bottomley --- drivers/s390/scsi/zfcp_dbf.c | 18 ++++++++++++++++++ 1 file changed, 18 insertions(+) (limited to 'drivers/s390') diff --git a/drivers/s390/scsi/zfcp_dbf.c b/drivers/s390/scsi/zfcp_dbf.c index 701046c9bb33..0faadb0cda24 100644 --- a/drivers/s390/scsi/zfcp_dbf.c +++ b/drivers/s390/scsi/zfcp_dbf.c @@ -31,6 +31,24 @@ MODULE_PARM_DESC(dbfsize, #define ZFCP_LOG_AREA ZFCP_LOG_AREA_OTHER +static void zfcp_dbf_hexdump(debug_info_t *dbf, void *to, int to_len, + int level, char *from, int from_len) +{ + int offset; + struct zfcp_dbf_dump *dump = to; + int room = to_len - sizeof(*dump); + + for (offset = 0; offset < from_len; offset += dump->size) { + memset(to, 0, to_len); + strncpy(dump->tag, "dump", ZFCP_DBF_TAG_SIZE); + dump->total_size = from_len; + dump->offset = offset; + dump->size = min(from_len - offset, room); + memcpy(dump->data, from + offset, dump->size); + debug_event(dbf, level, dump, dump->size); + } +} + static int zfcp_dbf_stck(char *out_buf, const char *label, unsigned long long stck) { -- cgit v1.2.3-59-g8ed1b From 0f65e951ee0c4a7506c6c0489b59a6fb1d2f0e75 Mon Sep 17 00:00:00 2001 From: Martin Peschke Date: Thu, 27 Mar 2008 14:21:56 +0100 Subject: [SCSI] zfcp: Clean up _zfcp_san_dbf_event_common_els Clean up _zfcp_san_dbf_event_common_els using zfcp_dbf_hexdump() helper. Signed-off-by: Martin Peschke Signed-off-by: Christof Schmitt Signed-off-by: James Bottomley --- drivers/s390/scsi/zfcp_dbf.c | 39 ++++++++++----------------------------- 1 file changed, 10 insertions(+), 29 deletions(-) (limited to 'drivers/s390') diff --git a/drivers/s390/scsi/zfcp_dbf.c b/drivers/s390/scsi/zfcp_dbf.c index 0faadb0cda24..5019d7b738bc 100644 --- a/drivers/s390/scsi/zfcp_dbf.c +++ b/drivers/s390/scsi/zfcp_dbf.c @@ -562,38 +562,19 @@ _zfcp_san_dbf_event_common_els(const char *tag, int level, { struct zfcp_adapter *adapter = fsf_req->adapter; struct zfcp_san_dbf_record *rec = &adapter->san_dbf_buf; - struct zfcp_dbf_dump *dump = (struct zfcp_dbf_dump *)rec; unsigned long flags; - int offset = 0; spin_lock_irqsave(&adapter->san_dbf_lock, flags); - do { - memset(rec, 0, sizeof(struct zfcp_san_dbf_record)); - if (offset == 0) { - strncpy(rec->tag, tag, ZFCP_DBF_TAG_SIZE); - rec->fsf_reqid = (unsigned long)fsf_req; - rec->fsf_seqno = fsf_req->seq_no; - rec->s_id = s_id; - rec->d_id = d_id; - rec->type.els.ls_code = ls_code; - buflen = min(buflen, ZFCP_DBF_ELS_MAX_PAYLOAD); - rec->type.els.payload_size = buflen; - memcpy(rec->type.els.payload, - buffer, min(buflen, ZFCP_DBF_ELS_PAYLOAD)); - offset += min(buflen, ZFCP_DBF_ELS_PAYLOAD); - } else { - strncpy(dump->tag, "dump", ZFCP_DBF_TAG_SIZE); - dump->total_size = buflen; - dump->offset = offset; - dump->size = min(buflen - offset, - (int)sizeof(struct zfcp_san_dbf_record) - - (int)sizeof(struct zfcp_dbf_dump)); - memcpy(dump->data, buffer + offset, dump->size); - offset += dump->size; - } - debug_event(adapter->san_dbf, level, - rec, sizeof(struct zfcp_san_dbf_record)); - } while (offset < buflen); + memset(rec, 0, sizeof(struct zfcp_san_dbf_record)); + strncpy(rec->tag, tag, ZFCP_DBF_TAG_SIZE); + rec->fsf_reqid = (unsigned long)fsf_req; + rec->fsf_seqno = fsf_req->seq_no; + rec->s_id = s_id; + rec->d_id = d_id; + rec->type.els.ls_code = ls_code; + debug_event(adapter->san_dbf, level, rec, sizeof(*rec)); + zfcp_dbf_hexdump(adapter->san_dbf, rec, sizeof(*rec), level, + buffer, min(buflen, ZFCP_DBF_ELS_MAX_PAYLOAD)); 
spin_unlock_irqrestore(&adapter->san_dbf_lock, flags); } -- cgit v1.2.3-59-g8ed1b From 07c70d26b556b342e7ad285963974808efba3104 Mon Sep 17 00:00:00 2001 From: Martin Peschke Date: Thu, 27 Mar 2008 14:21:57 +0100 Subject: [SCSI] zfcp: Remove qtcb dump to kernel log Is not appropriate to printk() tons of hardware trace data. Signed-off-by: Martin Peschke Signed-off-by: Christof Schmitt Signed-off-by: James Bottomley --- drivers/s390/scsi/zfcp_fsf.c | 31 ------------------------------- 1 file changed, 31 deletions(-) (limited to 'drivers/s390') diff --git a/drivers/s390/scsi/zfcp_fsf.c b/drivers/s390/scsi/zfcp_fsf.c index 2ed3c7b48882..264f5f1bdde6 100644 --- a/drivers/s390/scsi/zfcp_fsf.c +++ b/drivers/s390/scsi/zfcp_fsf.c @@ -284,37 +284,6 @@ zfcp_fsf_protstatus_eval(struct zfcp_fsf_req *fsf_req) goto skip_protstatus; } - /* log additional information provided by FSF (if any) */ - if (likely(qtcb->header.log_length)) { - /* do not trust them ;-) */ - if (unlikely(qtcb->header.log_start > - sizeof(struct fsf_qtcb))) { - ZFCP_LOG_NORMAL - ("bug: ULP (FSF logging) log data starts " - "beyond end of packet header. Ignored. " - "(start=%i, size=%li)\n", - qtcb->header.log_start, - sizeof(struct fsf_qtcb)); - goto forget_log; - } - if (unlikely((size_t) (qtcb->header.log_start + - qtcb->header.log_length) > - sizeof(struct fsf_qtcb))) { - ZFCP_LOG_NORMAL("bug: ULP (FSF logging) log data ends " - "beyond end of packet header. Ignored. " - "(start=%i, length=%i, size=%li)\n", - qtcb->header.log_start, - qtcb->header.log_length, - sizeof(struct fsf_qtcb)); - goto forget_log; - } - ZFCP_LOG_TRACE("ULP log data: \n"); - ZFCP_HEX_DUMP(ZFCP_LOG_LEVEL_TRACE, - (char *) qtcb + qtcb->header.log_start, - qtcb->header.log_length); - } - forget_log: - /* evaluate FSF Protocol Status */ switch (qtcb->prefix.prot_status) { -- cgit v1.2.3-59-g8ed1b From b75db73159ccffaf60a67896fdfed3856b1f65e3 Mon Sep 17 00:00:00 2001 From: Martin Peschke Date: Thu, 27 Mar 2008 14:21:58 +0100 Subject: [SCSI] zfcp: Add qtcb dump to hba debug trace This patch adds per request hardware debugging data to the trace record which is written per request. It's a replacement for some sad kernel message based debugging code. Considering the amount of trace data, printk() is not suitable for this stuff. Writing binary traces is more efficient. In addition we got all information in one place. The QTCB trace data is only dumped for requests other than SCSI requests. Otherwise we would flood the trace ring buffer. We are mostly interested in non-SCSI, recovery related requests here anyway. This patch also works around a known hardware bug. It truncates QTCB traces so that we do not save unused areas of the hardware trace. 
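The workaround boils down to trimming trailing zero bytes from the hardware log area before it is copied into the trace. A minimal sketch of that trimming step (stand-alone C, illustrative name):

#include <stddef.h>

/* Return the buffer length with trailing zero bytes stripped off. */
static size_t trim_trailing_zeros(const unsigned char *buf, size_t len)
{
	while (len && buf[len - 1] == 0)
		len--;
	return len;
}

The diff below applies the same idea to the QTCB log area (starting at qtcb->header.log_start, up to log_length bytes) before handing it to the hex-dump helper, and skips the dump for FCP commands entirely so SCSI traffic does not flood the trace ring buffer.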
Signed-off-by: Martin Peschke Signed-off-by: Christof Schmitt Signed-off-by: James Bottomley --- drivers/s390/scsi/zfcp_dbf.c | 14 ++++++++++++++ 1 file changed, 14 insertions(+) (limited to 'drivers/s390') diff --git a/drivers/s390/scsi/zfcp_dbf.c b/drivers/s390/scsi/zfcp_dbf.c index 5019d7b738bc..658053c74707 100644 --- a/drivers/s390/scsi/zfcp_dbf.c +++ b/drivers/s390/scsi/zfcp_dbf.c @@ -179,6 +179,9 @@ void zfcp_hba_dbf_event_fsf_response(struct zfcp_fsf_req *fsf_req) (fsf_req->fsf_command == FSF_QTCB_OPEN_LUN)) { strncpy(rec->tag2, "open", ZFCP_DBF_TAG_SIZE); level = 4; + } else if (qtcb->header.log_length) { + strncpy(rec->tag2, "qtcb", ZFCP_DBF_TAG_SIZE); + level = 5; } else { strncpy(rec->tag2, "norm", ZFCP_DBF_TAG_SIZE); level = 6; @@ -250,6 +253,17 @@ void zfcp_hba_dbf_event_fsf_response(struct zfcp_fsf_req *fsf_req) debug_event(adapter->hba_dbf, level, rec, sizeof(struct zfcp_hba_dbf_record)); + + /* have fcp channel microcode fixed to use as little as possible */ + if (fsf_req->fsf_command != FSF_QTCB_FCP_CMND) { + /* adjust length skipping trailing zeros */ + char *buf = (char *)qtcb + qtcb->header.log_start; + int len = qtcb->header.log_length; + for (; len && !buf[len - 1]; len--); + zfcp_dbf_hexdump(adapter->hba_dbf, rec, sizeof(*rec), level, + buf, len); + } + spin_unlock_irqrestore(&adapter->hba_dbf_lock, flags); } -- cgit v1.2.3-59-g8ed1b From 10223c60daf226ee2248b772892abc83cd875aa7 Mon Sep 17 00:00:00 2001 From: Martin Peschke Date: Thu, 27 Mar 2008 14:21:59 +0100 Subject: [SCSI] zfcp: Introduce printf helper functions for debug trace. Introducing helper functions that allow for code simpfifications. Signed-off-by: Martin Peschke Signed-off-by: Christof Schmitt Signed-off-by: James Bottomley --- drivers/s390/scsi/zfcp_dbf.c | 16 ++++++++++++++++ 1 file changed, 16 insertions(+) (limited to 'drivers/s390') diff --git a/drivers/s390/scsi/zfcp_dbf.c b/drivers/s390/scsi/zfcp_dbf.c index 658053c74707..453343783990 100644 --- a/drivers/s390/scsi/zfcp_dbf.c +++ b/drivers/s390/scsi/zfcp_dbf.c @@ -95,6 +95,22 @@ zfcp_dbf_view(char *out_buf, const char *label, const char *format, ...) return len; } +static void zfcp_dbf_outs(char **buf, const char *s1, const char *s2) +{ + *buf += sprintf(*buf, "%-24s%s\n", s1, s2); +} + +static void zfcp_dbf_out(char **buf, const char *s, const char *format, ...) +{ + va_list arg; + + *buf += sprintf(*buf, "%-24s", s); + va_start(arg, format); + *buf += vsprintf(*buf, format, arg); + va_end(arg); + *buf += sprintf(*buf, "\n"); +} + static int zfcp_dbf_view_dump(char *out_buf, const char *label, char *buffer, int buflen, int offset, int total_size) -- cgit v1.2.3-59-g8ed1b From d79a83dbffe2e49e73f2903c350937faf2e0c2f1 Mon Sep 17 00:00:00 2001 From: Martin Peschke Date: Thu, 27 Mar 2008 14:22:00 +0100 Subject: [SCSI] zfcp: Register new recovery trace. This patch registers the new recovery trace with the s390 debug feature. 
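Registration follows the usual s390 debug feature pattern: allocate a named trace area sized for one record type, attach one or more views for formatting, and set the default trace level. A condensed sketch of that sequence (error handling and the driver's structured "rec" view omitted; page count and record size are passed in here, whereas the patch uses dbfsize and sizeof(struct zfcp_rec_dbf_record)):

#include <linux/kernel.h>
#include <asm/debug.h>

static debug_info_t *register_rec_trace(const char *busid, int pages,
					int recsize)
{
	char name[32];
	debug_info_t *dbf;

	snprintf(name, sizeof(name), "zfcp_%s_rec", busid);
	dbf = debug_register(name, pages, 1, recsize);
	if (!dbf)
		return NULL;
	debug_register_view(dbf, &debug_hex_ascii_view);
	debug_set_level(dbf, 3);	/* default level, as in the patch */
	return dbf;
}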
Signed-off-by: Martin Peschke Signed-off-by: Christof Schmitt Signed-off-by: James Bottomley --- drivers/s390/scsi/zfcp_aux.c | 1 + drivers/s390/scsi/zfcp_dbf.c | 42 ++++++++++++++++++++++++++++++++++++++++++ drivers/s390/scsi/zfcp_def.h | 10 ++++++++++ 3 files changed, 53 insertions(+) (limited to 'drivers/s390') diff --git a/drivers/s390/scsi/zfcp_aux.c b/drivers/s390/scsi/zfcp_aux.c index 874b55ed00a3..f3eff7ebcb6b 100644 --- a/drivers/s390/scsi/zfcp_aux.c +++ b/drivers/s390/scsi/zfcp_aux.c @@ -1034,6 +1034,7 @@ zfcp_adapter_enqueue(struct ccw_device *ccw_device) spin_lock_init(&adapter->hba_dbf_lock); spin_lock_init(&adapter->san_dbf_lock); spin_lock_init(&adapter->scsi_dbf_lock); + spin_lock_init(&adapter->rec_dbf_lock); retval = zfcp_adapter_debug_register(adapter); if (retval) diff --git a/drivers/s390/scsi/zfcp_dbf.c b/drivers/s390/scsi/zfcp_dbf.c index 453343783990..e7712eb13eea 100644 --- a/drivers/s390/scsi/zfcp_dbf.c +++ b/drivers/s390/scsi/zfcp_dbf.c @@ -520,6 +520,36 @@ static struct debug_view zfcp_hba_dbf_view = { NULL }; +static const char *zfcp_rec_dbf_tags[] = { +}; + +static const char *zfcp_rec_dbf_ids[] = { +}; + +static int zfcp_rec_dbf_view_format(debug_info_t *id, struct debug_view *view, + char *buf, const char *_rec) +{ + struct zfcp_rec_dbf_record *r = (struct zfcp_rec_dbf_record *)_rec; + char *p = buf; + + zfcp_dbf_outs(&p, "tag", zfcp_rec_dbf_tags[r->id]); + zfcp_dbf_outs(&p, "hint", zfcp_rec_dbf_ids[r->id2]); + zfcp_dbf_out(&p, "id", "%d", r->id2); + switch (r->id) { + } + sprintf(p, "\n"); + return (p - buf) + 1; +} + +static struct debug_view zfcp_rec_dbf_view = { + "structured", + NULL, + &zfcp_dbf_view_header, + &zfcp_rec_dbf_view_format, + NULL, + NULL +}; + static void _zfcp_san_dbf_event_common_ct(const char *tag, struct zfcp_fsf_req *fsf_req, u32 s_id, u32 d_id, void *buffer, int buflen) @@ -934,6 +964,16 @@ int zfcp_adapter_debug_register(struct zfcp_adapter *adapter) debug_register_view(adapter->erp_dbf, &debug_hex_ascii_view); debug_set_level(adapter->erp_dbf, 3); + /* debug feature area which records recovery activity */ + sprintf(dbf_name, "zfcp_%s_rec", zfcp_get_busid_by_adapter(adapter)); + adapter->rec_dbf = debug_register(dbf_name, dbfsize, 1, + sizeof(struct zfcp_rec_dbf_record)); + if (!adapter->rec_dbf) + goto failed; + debug_register_view(adapter->rec_dbf, &debug_hex_ascii_view); + debug_register_view(adapter->rec_dbf, &zfcp_rec_dbf_view); + debug_set_level(adapter->rec_dbf, 3); + /* debug feature area which records HBA (FSF and QDIO) conditions */ sprintf(dbf_name, "zfcp_%s_hba", zfcp_get_busid_by_adapter(adapter)); adapter->hba_dbf = debug_register(dbf_name, dbfsize, 1, @@ -981,10 +1021,12 @@ void zfcp_adapter_debug_unregister(struct zfcp_adapter *adapter) debug_unregister(adapter->scsi_dbf); debug_unregister(adapter->san_dbf); debug_unregister(adapter->hba_dbf); + debug_unregister(adapter->rec_dbf); debug_unregister(adapter->erp_dbf); adapter->scsi_dbf = NULL; adapter->san_dbf = NULL; adapter->hba_dbf = NULL; + adapter->rec_dbf = NULL; adapter->erp_dbf = NULL; } diff --git a/drivers/s390/scsi/zfcp_def.h b/drivers/s390/scsi/zfcp_def.h index 662c70f537ec..f29bee528489 100644 --- a/drivers/s390/scsi/zfcp_def.h +++ b/drivers/s390/scsi/zfcp_def.h @@ -279,6 +279,13 @@ struct zfcp_erp_dbf_record { u8 dummy[16]; } __attribute__ ((packed)); +struct zfcp_rec_dbf_record { + u8 id; + u8 id2; + union { + } u; +} __attribute__ ((packed)); + struct zfcp_hba_dbf_record_response { u32 fsf_command; u64 fsf_reqid; @@ -917,14 +924,17 @@ struct 
zfcp_adapter { for memory */ struct zfcp_port *nameserver_port; /* adapter's nameserver */ debug_info_t *erp_dbf; + debug_info_t *rec_dbf; debug_info_t *hba_dbf; debug_info_t *san_dbf; /* debug feature areas */ debug_info_t *scsi_dbf; spinlock_t erp_dbf_lock; + spinlock_t rec_dbf_lock; spinlock_t hba_dbf_lock; spinlock_t san_dbf_lock; spinlock_t scsi_dbf_lock; struct zfcp_erp_dbf_record erp_dbf_buf; + struct zfcp_rec_dbf_record rec_dbf_buf; struct zfcp_hba_dbf_record hba_dbf_buf; struct zfcp_san_dbf_record san_dbf_buf; struct zfcp_scsi_dbf_record scsi_dbf_buf; -- cgit v1.2.3-59-g8ed1b From 348447e85749120ad600a5c8e23b6bb7058b931d Mon Sep 17 00:00:00 2001 From: Martin Peschke Date: Thu, 27 Mar 2008 14:22:01 +0100 Subject: [SCSI] zfcp: Add trace records for recovery thread and its queues This patch writes trace records which provide information about the operation of the zfcp error recovery thread and the queues it works on. Signed-off-by: Martin Peschke Signed-off-by: Christof Schmitt Signed-off-by: James Bottomley --- drivers/s390/scsi/zfcp_dbf.c | 51 ++++++++++++++++++++++++++++++++++++++++++++ drivers/s390/scsi/zfcp_def.h | 12 +++++++++++ drivers/s390/scsi/zfcp_erp.c | 9 ++++++++ drivers/s390/scsi/zfcp_ext.h | 3 +++ 4 files changed, 75 insertions(+) (limited to 'drivers/s390') diff --git a/drivers/s390/scsi/zfcp_dbf.c b/drivers/s390/scsi/zfcp_dbf.c index e7712eb13eea..5a4b1e9a8b50 100644 --- a/drivers/s390/scsi/zfcp_dbf.c +++ b/drivers/s390/scsi/zfcp_dbf.c @@ -521,9 +521,19 @@ static struct debug_view zfcp_hba_dbf_view = { }; static const char *zfcp_rec_dbf_tags[] = { + [ZFCP_REC_DBF_ID_THREAD] = "thread", }; static const char *zfcp_rec_dbf_ids[] = { + [1] = "new", + [2] = "ready", + [3] = "kill", + [4] = "down sleep", + [5] = "down wakeup", + [6] = "down sleep ecd", + [7] = "down wakeup ecd", + [8] = "down sleep epd", + [9] = "down wakeup epd", }; static int zfcp_rec_dbf_view_format(debug_info_t *id, struct debug_view *view, @@ -536,6 +546,12 @@ static int zfcp_rec_dbf_view_format(debug_info_t *id, struct debug_view *view, zfcp_dbf_outs(&p, "hint", zfcp_rec_dbf_ids[r->id2]); zfcp_dbf_out(&p, "id", "%d", r->id2); switch (r->id) { + case ZFCP_REC_DBF_ID_THREAD: + zfcp_dbf_out(&p, "sema", "%d", r->u.thread.sema); + zfcp_dbf_out(&p, "total", "%d", r->u.thread.total); + zfcp_dbf_out(&p, "ready", "%d", r->u.thread.ready); + zfcp_dbf_out(&p, "running", "%d", r->u.thread.running); + break; } sprintf(p, "\n"); return (p - buf) + 1; @@ -550,6 +566,41 @@ static struct debug_view zfcp_rec_dbf_view = { NULL }; +/** + * zfcp_rec_dbf_event_thread - trace event related to recovery thread operation + * @id2: identifier for event + * @adapter: adapter + * @lock: non-zero value indicates that erp_lock has not yet been acquired + */ +void zfcp_rec_dbf_event_thread(u8 id2, struct zfcp_adapter *adapter, int lock) +{ + struct zfcp_rec_dbf_record *r = &adapter->rec_dbf_buf; + unsigned long flags = 0; + struct list_head *entry; + unsigned ready = 0, running = 0, total; + + if (lock) + read_lock_irqsave(&adapter->erp_lock, flags); + list_for_each(entry, &adapter->erp_ready_head) + ready++; + list_for_each(entry, &adapter->erp_running_head) + running++; + total = adapter->erp_total_count; + if (lock) + read_unlock_irqrestore(&adapter->erp_lock, flags); + + spin_lock_irqsave(&adapter->rec_dbf_lock, flags); + memset(r, 0, sizeof(*r)); + r->id = ZFCP_REC_DBF_ID_THREAD; + r->id2 = id2; + r->u.thread.sema = atomic_read(&adapter->erp_ready_sem.count); + r->u.thread.total = total; + r->u.thread.ready = ready; + 
r->u.thread.running = running; + debug_event(adapter->rec_dbf, 5, r, sizeof(*r)); + spin_unlock_irqrestore(&adapter->rec_dbf_lock, flags); +} + static void _zfcp_san_dbf_event_common_ct(const char *tag, struct zfcp_fsf_req *fsf_req, u32 s_id, u32 d_id, void *buffer, int buflen) diff --git a/drivers/s390/scsi/zfcp_def.h b/drivers/s390/scsi/zfcp_def.h index f29bee528489..332c83eba6c7 100644 --- a/drivers/s390/scsi/zfcp_def.h +++ b/drivers/s390/scsi/zfcp_def.h @@ -279,13 +279,25 @@ struct zfcp_erp_dbf_record { u8 dummy[16]; } __attribute__ ((packed)); +struct zfcp_rec_dbf_record_thread { + u32 sema; + u32 total; + u32 ready; + u32 running; +} __attribute__ ((packed)); + struct zfcp_rec_dbf_record { u8 id; u8 id2; union { + struct zfcp_rec_dbf_record_thread thread; } u; } __attribute__ ((packed)); +enum { + ZFCP_REC_DBF_ID_THREAD, +}; + struct zfcp_hba_dbf_record_response { u32 fsf_command; u64 fsf_reqid; diff --git a/drivers/s390/scsi/zfcp_erp.c b/drivers/s390/scsi/zfcp_erp.c index 2dc8110ebf74..f9383f068169 100644 --- a/drivers/s390/scsi/zfcp_erp.c +++ b/drivers/s390/scsi/zfcp_erp.c @@ -788,6 +788,7 @@ zfcp_erp_action_ready(struct zfcp_erp_action *erp_action) zfcp_erp_action_to_ready(erp_action); up(&adapter->erp_ready_sem); + zfcp_rec_dbf_event_thread(2, adapter, 0); } /* @@ -1027,6 +1028,7 @@ zfcp_erp_thread_kill(struct zfcp_adapter *adapter) atomic_set_mask(ZFCP_STATUS_ADAPTER_ERP_THREAD_KILL, &adapter->status); up(&adapter->erp_ready_sem); + zfcp_rec_dbf_event_thread(2, adapter, 1); wait_event(adapter->erp_thread_wqh, !atomic_test_mask(ZFCP_STATUS_ADAPTER_ERP_THREAD_UP, @@ -1084,7 +1086,9 @@ zfcp_erp_thread(void *data) * no action in 'ready' queue to be processed and * thread is not to be killed */ + zfcp_rec_dbf_event_thread(4, adapter, 1); down_interruptible(&adapter->erp_ready_sem); + zfcp_rec_dbf_event_thread(5, adapter, 1); debug_text_event(adapter->erp_dbf, 5, "a_th_woken"); } @@ -2150,7 +2154,9 @@ zfcp_erp_adapter_strategy_open_fsf_xconfig(struct zfcp_erp_action *erp_action) * _must_ be the one belonging to the 'exchange config * data' request. 
*/ + zfcp_rec_dbf_event_thread(6, adapter, 1); down(&adapter->erp_ready_sem); + zfcp_rec_dbf_event_thread(7, adapter, 1); if (erp_action->status & ZFCP_STATUS_ERP_TIMEDOUT) { ZFCP_LOG_INFO("error: exchange of configuration data " "for adapter %s timed out\n", @@ -2207,7 +2213,9 @@ zfcp_erp_adapter_strategy_open_fsf_xport(struct zfcp_erp_action *erp_action) debug_text_event(adapter->erp_dbf, 6, "a_xport_ok"); ret = ZFCP_ERP_SUCCEEDED; + zfcp_rec_dbf_event_thread(8, adapter, 1); down(&adapter->erp_ready_sem); + zfcp_rec_dbf_event_thread(9, adapter, 1); if (erp_action->status & ZFCP_STATUS_ERP_TIMEDOUT) { ZFCP_LOG_INFO("error: exchange port data timed out (adapter " "%s)\n", zfcp_get_busid_by_adapter(adapter)); @@ -3091,6 +3099,7 @@ zfcp_erp_action_enqueue(int action, /* finally put it into 'ready' queue and kick erp thread */ list_add_tail(&erp_action->list, &adapter->erp_ready_head); up(&adapter->erp_ready_sem); + zfcp_rec_dbf_event_thread(1, adapter, 0); retval = 0; out: return retval; diff --git a/drivers/s390/scsi/zfcp_ext.h b/drivers/s390/scsi/zfcp_ext.h index 06b1079b7f3d..f64951ba98c7 100644 --- a/drivers/s390/scsi/zfcp_ext.h +++ b/drivers/s390/scsi/zfcp_ext.h @@ -164,6 +164,9 @@ extern void zfcp_erp_port_access_changed(struct zfcp_port *); extern void zfcp_erp_unit_access_changed(struct zfcp_unit *); /******************************** AUX ****************************************/ +extern void zfcp_rec_dbf_event_thread(u8 id, struct zfcp_adapter *adapter, + int lock); + extern void zfcp_hba_dbf_event_fsf_response(struct zfcp_fsf_req *); extern void zfcp_hba_dbf_event_fsf_unsol(const char *, struct zfcp_adapter *, struct fsf_status_read_buffer *); -- cgit v1.2.3-59-g8ed1b From 698ec01635819c5ae60090bb4efcbeffc41642fb Mon Sep 17 00:00:00 2001 From: Martin Peschke Date: Thu, 27 Mar 2008 14:22:02 +0100 Subject: [SCSI] zfcp: Add traces for state changes. This patch writes a trace record which provides information about state changes for adapters, ports and units, e.g. target failure, targets becoming online, targets being temporarily blocked due to pending recovery, targets which have been recovered successfully etc. 
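A detail worth noting in the implementation: a state-change record is only written when a status bit actually flips, which the patch determines by comparing the mask against the current status word before applying it. A simplified model of that check on a plain integer (the driver operates on atomic_t status words and, as the FIXME in the diff notes, the read-modify sequence is not truly atomic):

/* Set the bits in 'mask' and report which of them were not set before.
 * A non-zero result means the status really changed, so the caller
 * emits a zfcp_rec_dbf_event_* trace record; zero means no trace. */
static unsigned long set_mask_report_changed(unsigned long *status,
					     unsigned long mask)
{
	unsigned long changed = (*status ^ mask) & mask;

	*status |= mask;
	return changed;
}

The clear path works the same way, using the old status ANDed with the mask as the changed-bit set, so repeated blocking or unblocking of an adapter, port or unit does not generate duplicate trace entries.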
Signed-off-by: Martin Peschke Signed-off-by: Christof Schmitt Signed-off-by: James Bottomley --- drivers/s390/scsi/zfcp_ccw.c | 6 +- drivers/s390/scsi/zfcp_dbf.c | 126 +++++++++++++++++++++++++++ drivers/s390/scsi/zfcp_def.h | 11 +++ drivers/s390/scsi/zfcp_erp.c | 153 ++++++++++++++++++++------------- drivers/s390/scsi/zfcp_ext.h | 24 +++--- drivers/s390/scsi/zfcp_fsf.c | 70 +++++++-------- drivers/s390/scsi/zfcp_scsi.c | 2 +- drivers/s390/scsi/zfcp_sysfs_adapter.c | 4 +- drivers/s390/scsi/zfcp_sysfs_port.c | 3 +- drivers/s390/scsi/zfcp_sysfs_unit.c | 3 +- 10 files changed, 288 insertions(+), 114 deletions(-) (limited to 'drivers/s390') diff --git a/drivers/s390/scsi/zfcp_ccw.c b/drivers/s390/scsi/zfcp_ccw.c index edc5015e920d..dd40b1c350b9 100644 --- a/drivers/s390/scsi/zfcp_ccw.c +++ b/drivers/s390/scsi/zfcp_ccw.c @@ -170,8 +170,8 @@ zfcp_ccw_set_online(struct ccw_device *ccw_device) BUG_ON(!zfcp_reqlist_isempty(adapter)); adapter->req_no = 0; - zfcp_erp_modify_adapter_status(adapter, ZFCP_STATUS_COMMON_RUNNING, - ZFCP_SET); + zfcp_erp_modify_adapter_status(adapter, 10, 0, + ZFCP_STATUS_COMMON_RUNNING, ZFCP_SET); zfcp_erp_adapter_reopen(adapter, ZFCP_STATUS_COMMON_ERP_FAILED); zfcp_erp_wait(adapter); goto out; @@ -236,7 +236,7 @@ zfcp_ccw_notify(struct ccw_device *ccw_device, int event) ZFCP_LOG_NORMAL("adapter %s: operational again\n", zfcp_get_busid_by_adapter(adapter)); debug_text_event(adapter->erp_dbf,1,"dev_oper"); - zfcp_erp_modify_adapter_status(adapter, + zfcp_erp_modify_adapter_status(adapter, 11, 0, ZFCP_STATUS_COMMON_RUNNING, ZFCP_SET); zfcp_erp_adapter_reopen(adapter, diff --git a/drivers/s390/scsi/zfcp_dbf.c b/drivers/s390/scsi/zfcp_dbf.c index 5a4b1e9a8b50..2fcfe9bec554 100644 --- a/drivers/s390/scsi/zfcp_dbf.c +++ b/drivers/s390/scsi/zfcp_dbf.c @@ -522,6 +522,7 @@ static struct debug_view zfcp_hba_dbf_view = { static const char *zfcp_rec_dbf_tags[] = { [ZFCP_REC_DBF_ID_THREAD] = "thread", + [ZFCP_REC_DBF_ID_TARGET] = "target", }; static const char *zfcp_rec_dbf_ids[] = { @@ -534,6 +535,58 @@ static const char *zfcp_rec_dbf_ids[] = { [7] = "down wakeup ecd", [8] = "down sleep epd", [9] = "down wakeup epd", + [10] = "online", + [11] = "operational", + [12] = "scsi slave destroy", + [13] = "propagate failed adapter", + [14] = "propagate failed port", + [15] = "block adapter", + [16] = "unblock adapter", + [17] = "block port", + [18] = "unblock port", + [19] = "block unit", + [20] = "unblock unit", + [21] = "unit recovery failed", + [22] = "port recovery failed", + [23] = "adapter recovery failed", + [24] = "qdio queues down", + [25] = "p2p failed", + [26] = "nameserver lookup failed", + [27] = "nameserver port failed", + [28] = "link up", + [29] = "link down", + [30] = "link up status read", + [31] = "open port failed", + [32] = "open port failed", + [33] = "close port", + [34] = "open unit failed", + [35] = "exclusive open unit failed", + [36] = "shared open unit failed", + [37] = "link down", + [38] = "link down status read no link", + [39] = "link down status read fdisc login", + [40] = "link down status read firmware update", + [41] = "link down status read unknown reason", + [42] = "link down ecd incomplete", + [43] = "link down epd incomplete", + [44] = "sysfs adapter recovery", + [45] = "sysfs port recovery", + [46] = "sysfs unit recovery", + [47] = "port boxed abort", + [48] = "unit boxed abort", + [49] = "port boxed ct", + [50] = "port boxed close physical", + [51] = "port boxed open unit", + [52] = "port boxed close unit", + [53] = "port boxed fcp", + [54] = "unit 
boxed fcp", + [55] = "port access denied ct", + [56] = "port access denied els", + [57] = "port access denied open port", + [58] = "port access denied close physical", + [59] = "unit access denied open unit", + [60] = "shared unit access denied open unit", + [61] = "unit access denied fcp", }; static int zfcp_rec_dbf_view_format(debug_info_t *id, struct debug_view *view, @@ -552,6 +605,14 @@ static int zfcp_rec_dbf_view_format(debug_info_t *id, struct debug_view *view, zfcp_dbf_out(&p, "ready", "%d", r->u.thread.ready); zfcp_dbf_out(&p, "running", "%d", r->u.thread.running); break; + case ZFCP_REC_DBF_ID_TARGET: + zfcp_dbf_out(&p, "reference", "0x%016Lx", r->u.target.ref); + zfcp_dbf_out(&p, "status", "0x%08x", r->u.target.status); + zfcp_dbf_out(&p, "erp_count", "%d", r->u.target.erp_count); + zfcp_dbf_out(&p, "d_id", "0x%06x", r->u.target.d_id); + zfcp_dbf_out(&p, "wwpn", "0x%016Lx", r->u.target.wwpn); + zfcp_dbf_out(&p, "fcp_lun", "0x%016Lx", r->u.target.fcp_lun); + break; } sprintf(p, "\n"); return (p - buf) + 1; @@ -601,6 +662,71 @@ void zfcp_rec_dbf_event_thread(u8 id2, struct zfcp_adapter *adapter, int lock) spin_unlock_irqrestore(&adapter->rec_dbf_lock, flags); } +static void zfcp_rec_dbf_event_target(u8 id2, u64 ref, + struct zfcp_adapter *adapter, + atomic_t *status, atomic_t *erp_count, + u64 wwpn, u32 d_id, u64 fcp_lun) +{ + struct zfcp_rec_dbf_record *r = &adapter->rec_dbf_buf; + unsigned long flags; + + spin_lock_irqsave(&adapter->rec_dbf_lock, flags); + memset(r, 0, sizeof(*r)); + r->id = ZFCP_REC_DBF_ID_TARGET; + r->id2 = id2; + r->u.target.ref = ref; + r->u.target.status = atomic_read(status); + r->u.target.wwpn = wwpn; + r->u.target.d_id = d_id; + r->u.target.fcp_lun = fcp_lun; + r->u.target.erp_count = atomic_read(erp_count); + debug_event(adapter->rec_dbf, 3, r, sizeof(*r)); + spin_unlock_irqrestore(&adapter->rec_dbf_lock, flags); +} + +/** + * zfcp_rec_dbf_event_adapter - trace event for adapter state change + * @id: identifier for trigger of state change + * @ref: additional reference (e.g. request) + * @adapter: adapter + */ +void zfcp_rec_dbf_event_adapter(u8 id, u64 ref, struct zfcp_adapter *adapter) +{ + zfcp_rec_dbf_event_target(id, ref, adapter, &adapter->status, + &adapter->erp_counter, 0, 0, 0); +} + +/** + * zfcp_rec_dbf_event_port - trace event for port state change + * @id: identifier for trigger of state change + * @ref: additional reference (e.g. request) + * @port: port + */ +void zfcp_rec_dbf_event_port(u8 id, u64 ref, struct zfcp_port *port) +{ + struct zfcp_adapter *adapter = port->adapter; + + zfcp_rec_dbf_event_target(id, ref, adapter, &port->status, + &port->erp_counter, port->wwpn, port->d_id, + 0); +} + +/** + * zfcp_rec_dbf_event_unit - trace event for unit state change + * @id: identifier for trigger of state change + * @ref: additional reference (e.g. 
request) + * @unit: unit + */ +void zfcp_rec_dbf_event_unit(u8 id, u64 ref, struct zfcp_unit *unit) +{ + struct zfcp_port *port = unit->port; + struct zfcp_adapter *adapter = port->adapter; + + zfcp_rec_dbf_event_target(id, ref, adapter, &unit->status, + &unit->erp_counter, port->wwpn, port->d_id, + unit->fcp_lun); +} + static void _zfcp_san_dbf_event_common_ct(const char *tag, struct zfcp_fsf_req *fsf_req, u32 s_id, u32 d_id, void *buffer, int buflen) diff --git a/drivers/s390/scsi/zfcp_def.h b/drivers/s390/scsi/zfcp_def.h index 332c83eba6c7..659ec739706a 100644 --- a/drivers/s390/scsi/zfcp_def.h +++ b/drivers/s390/scsi/zfcp_def.h @@ -286,16 +286,27 @@ struct zfcp_rec_dbf_record_thread { u32 running; } __attribute__ ((packed)); +struct zfcp_rec_dbf_record_target { + u64 ref; + u32 status; + u32 d_id; + u64 wwpn; + u64 fcp_lun; + u32 erp_count; +} __attribute__ ((packed)); + struct zfcp_rec_dbf_record { u8 id; u8 id2; union { struct zfcp_rec_dbf_record_thread thread; + struct zfcp_rec_dbf_record_target target; } u; } __attribute__ ((packed)); enum { ZFCP_REC_DBF_ID_THREAD, + ZFCP_REC_DBF_ID_TARGET, }; struct zfcp_hba_dbf_record_response { diff --git a/drivers/s390/scsi/zfcp_erp.c b/drivers/s390/scsi/zfcp_erp.c index f9383f068169..bee03443efd4 100644 --- a/drivers/s390/scsi/zfcp_erp.c +++ b/drivers/s390/scsi/zfcp_erp.c @@ -163,7 +163,7 @@ static void zfcp_close_fsf(struct zfcp_adapter *adapter) /* reset FSF request sequence number */ adapter->fsf_req_seq_no = 0; /* all ports and units are closed */ - zfcp_erp_modify_adapter_status(adapter, + zfcp_erp_modify_adapter_status(adapter, 24, 0, ZFCP_STATUS_COMMON_OPEN, ZFCP_CLEAR); } @@ -216,7 +216,7 @@ zfcp_erp_adapter_reopen_internal(struct zfcp_adapter *adapter, int clear_mask) zfcp_get_busid_by_adapter(adapter)); debug_text_event(adapter->erp_dbf, 5, "a_ro_f"); /* ensure propagation of failed status to new devices */ - zfcp_erp_adapter_failed(adapter); + zfcp_erp_adapter_failed(adapter, 13, 0); retval = -EIO; goto out; } @@ -572,7 +572,7 @@ zfcp_erp_port_reopen_internal(struct zfcp_port *port, int clear_mask) debug_text_event(adapter->erp_dbf, 5, "p_ro_f"); debug_event(adapter->erp_dbf, 5, &port->wwpn, sizeof (wwn_t)); /* ensure propagation of failed status to new devices */ - zfcp_erp_port_failed(port); + zfcp_erp_port_failed(port, 14, 0); retval = -EIO; goto out; } @@ -688,18 +688,44 @@ zfcp_erp_unit_reopen(struct zfcp_unit *unit, int clear_mask) static void zfcp_erp_adapter_block(struct zfcp_adapter *adapter, int clear_mask) { debug_text_event(adapter->erp_dbf, 6, "a_bl"); - zfcp_erp_modify_adapter_status(adapter, + zfcp_erp_modify_adapter_status(adapter, 15, 0, ZFCP_STATUS_COMMON_UNBLOCKED | clear_mask, ZFCP_CLEAR); } +/* FIXME: isn't really atomic */ +/* + * returns the mask which has not been set so far, i.e. + * 0 if no bit has been changed, !0 if some bit has been changed + */ +static int atomic_test_and_set_mask(unsigned long mask, atomic_t *v) +{ + int changed_bits = (atomic_read(v) /*XOR*/^ mask) & mask; + atomic_set_mask(mask, v); + return changed_bits; +} + +/* FIXME: isn't really atomic */ +/* + * returns the mask which has not been cleared so far, i.e. 
+ * 0 if no bit has been changed, !0 if some bit has been changed + */ +static int atomic_test_and_clear_mask(unsigned long mask, atomic_t *v) +{ + int changed_bits = atomic_read(v) & mask; + atomic_clear_mask(mask, v); + return changed_bits; +} + /** * zfcp_erp_adapter_unblock - mark adapter as unblocked, allow scsi requests */ static void zfcp_erp_adapter_unblock(struct zfcp_adapter *adapter) { debug_text_event(adapter->erp_dbf, 6, "a_ubl"); - atomic_set_mask(ZFCP_STATUS_COMMON_UNBLOCKED, &adapter->status); + if (atomic_test_and_set_mask(ZFCP_STATUS_COMMON_UNBLOCKED, + &adapter->status)) + zfcp_rec_dbf_event_adapter(16, 0, adapter); } /* @@ -718,7 +744,7 @@ zfcp_erp_port_block(struct zfcp_port *port, int clear_mask) debug_text_event(adapter->erp_dbf, 6, "p_bl"); debug_event(adapter->erp_dbf, 6, &port->wwpn, sizeof (wwn_t)); - zfcp_erp_modify_port_status(port, + zfcp_erp_modify_port_status(port, 17, 0, ZFCP_STATUS_COMMON_UNBLOCKED | clear_mask, ZFCP_CLEAR); } @@ -737,7 +763,9 @@ zfcp_erp_port_unblock(struct zfcp_port *port) debug_text_event(adapter->erp_dbf, 6, "p_ubl"); debug_event(adapter->erp_dbf, 6, &port->wwpn, sizeof (wwn_t)); - atomic_set_mask(ZFCP_STATUS_COMMON_UNBLOCKED, &port->status); + if (atomic_test_and_set_mask(ZFCP_STATUS_COMMON_UNBLOCKED, + &port->status)) + zfcp_rec_dbf_event_port(18, 0, port); } /* @@ -756,7 +784,7 @@ zfcp_erp_unit_block(struct zfcp_unit *unit, int clear_mask) debug_text_event(adapter->erp_dbf, 6, "u_bl"); debug_event(adapter->erp_dbf, 6, &unit->fcp_lun, sizeof (fcp_lun_t)); - zfcp_erp_modify_unit_status(unit, + zfcp_erp_modify_unit_status(unit, 19, 0, ZFCP_STATUS_COMMON_UNBLOCKED | clear_mask, ZFCP_CLEAR); } @@ -775,7 +803,9 @@ zfcp_erp_unit_unblock(struct zfcp_unit *unit) debug_text_event(adapter->erp_dbf, 6, "u_ubl"); debug_event(adapter->erp_dbf, 6, &unit->fcp_lun, sizeof (fcp_lun_t)); - atomic_set_mask(ZFCP_STATUS_COMMON_UNBLOCKED, &unit->status); + if (atomic_test_and_set_mask(ZFCP_STATUS_COMMON_UNBLOCKED, + &unit->status)) + zfcp_rec_dbf_event_unit(20, 0, unit); } static void @@ -1357,9 +1387,9 @@ zfcp_erp_strategy_memwait(struct zfcp_erp_action *erp_action) * */ void -zfcp_erp_adapter_failed(struct zfcp_adapter *adapter) +zfcp_erp_adapter_failed(struct zfcp_adapter *adapter, u8 id, u64 ref) { - zfcp_erp_modify_adapter_status(adapter, + zfcp_erp_modify_adapter_status(adapter, id, ref, ZFCP_STATUS_COMMON_ERP_FAILED, ZFCP_SET); ZFCP_LOG_NORMAL("adapter erp failed on adapter %s\n", zfcp_get_busid_by_adapter(adapter)); @@ -1373,9 +1403,9 @@ zfcp_erp_adapter_failed(struct zfcp_adapter *adapter) * */ void -zfcp_erp_port_failed(struct zfcp_port *port) +zfcp_erp_port_failed(struct zfcp_port *port, u8 id, u64 ref) { - zfcp_erp_modify_port_status(port, + zfcp_erp_modify_port_status(port, id, ref, ZFCP_STATUS_COMMON_ERP_FAILED, ZFCP_SET); if (atomic_test_mask(ZFCP_STATUS_PORT_WKA, &port->status)) @@ -1397,9 +1427,9 @@ zfcp_erp_port_failed(struct zfcp_port *port) * */ void -zfcp_erp_unit_failed(struct zfcp_unit *unit) +zfcp_erp_unit_failed(struct zfcp_unit *unit, u8 id, u64 ref) { - zfcp_erp_modify_unit_status(unit, + zfcp_erp_modify_unit_status(unit, id, ref, ZFCP_STATUS_COMMON_ERP_FAILED, ZFCP_SET); ZFCP_LOG_NORMAL("unit erp failed on unit 0x%016Lx on port 0x%016Lx " @@ -1522,7 +1552,7 @@ zfcp_erp_strategy_check_unit(struct zfcp_unit *unit, int result) case ZFCP_ERP_FAILED : atomic_inc(&unit->erp_counter); if (atomic_read(&unit->erp_counter) > ZFCP_MAX_ERPS) - zfcp_erp_unit_failed(unit); + zfcp_erp_unit_failed(unit, 21, 0); break; case ZFCP_ERP_EXIT : /* 
nothing */ @@ -1551,7 +1581,7 @@ zfcp_erp_strategy_check_port(struct zfcp_port *port, int result) case ZFCP_ERP_FAILED : atomic_inc(&port->erp_counter); if (atomic_read(&port->erp_counter) > ZFCP_MAX_ERPS) - zfcp_erp_port_failed(port); + zfcp_erp_port_failed(port, 22, 0); break; case ZFCP_ERP_EXIT : /* nothing */ @@ -1579,7 +1609,7 @@ zfcp_erp_strategy_check_adapter(struct zfcp_adapter *adapter, int result) case ZFCP_ERP_FAILED : atomic_inc(&adapter->erp_counter); if (atomic_read(&adapter->erp_counter) > ZFCP_MAX_ERPS) - zfcp_erp_adapter_failed(adapter); + zfcp_erp_adapter_failed(adapter, 23, 0); break; case ZFCP_ERP_EXIT : /* nothing */ @@ -1737,29 +1767,30 @@ zfcp_erp_wait(struct zfcp_adapter *adapter) return retval; } -void -zfcp_erp_modify_adapter_status(struct zfcp_adapter *adapter, - u32 mask, int set_or_clear) +void zfcp_erp_modify_adapter_status(struct zfcp_adapter *adapter, u8 id, + u64 ref, u32 mask, int set_or_clear) { struct zfcp_port *port; - u32 common_mask = mask & ZFCP_COMMON_FLAGS; + u32 changed, common_mask = mask & ZFCP_COMMON_FLAGS; if (set_or_clear == ZFCP_SET) { - atomic_set_mask(mask, &adapter->status); + changed = atomic_test_and_set_mask(mask, &adapter->status); debug_text_event(adapter->erp_dbf, 3, "a_mod_as_s"); } else { - atomic_clear_mask(mask, &adapter->status); + changed = atomic_test_and_clear_mask(mask, &adapter->status); if (mask & ZFCP_STATUS_COMMON_ERP_FAILED) atomic_set(&adapter->erp_counter, 0); debug_text_event(adapter->erp_dbf, 3, "a_mod_as_c"); } + if (changed) + zfcp_rec_dbf_event_adapter(id, ref, adapter); debug_event(adapter->erp_dbf, 3, &mask, sizeof (u32)); /* Deal with all underlying devices, only pass common_mask */ if (common_mask) list_for_each_entry(port, &adapter->port_list_head, list) - zfcp_erp_modify_port_status(port, common_mask, - set_or_clear); + zfcp_erp_modify_port_status(port, id, ref, common_mask, + set_or_clear); } /* @@ -1768,29 +1799,31 @@ zfcp_erp_modify_adapter_status(struct zfcp_adapter *adapter, * purpose: sets the port and all underlying devices to ERP_FAILED * */ -void -zfcp_erp_modify_port_status(struct zfcp_port *port, u32 mask, int set_or_clear) +void zfcp_erp_modify_port_status(struct zfcp_port *port, u8 id, u64 ref, + u32 mask, int set_or_clear) { struct zfcp_unit *unit; - u32 common_mask = mask & ZFCP_COMMON_FLAGS; + u32 changed, common_mask = mask & ZFCP_COMMON_FLAGS; if (set_or_clear == ZFCP_SET) { - atomic_set_mask(mask, &port->status); + changed = atomic_test_and_set_mask(mask, &port->status); debug_text_event(port->adapter->erp_dbf, 3, "p_mod_ps_s"); } else { - atomic_clear_mask(mask, &port->status); + changed = atomic_test_and_clear_mask(mask, &port->status); if (mask & ZFCP_STATUS_COMMON_ERP_FAILED) atomic_set(&port->erp_counter, 0); debug_text_event(port->adapter->erp_dbf, 3, "p_mod_ps_c"); } + if (changed) + zfcp_rec_dbf_event_port(id, ref, port); debug_event(port->adapter->erp_dbf, 3, &port->wwpn, sizeof (wwn_t)); debug_event(port->adapter->erp_dbf, 3, &mask, sizeof (u32)); /* Modify status of all underlying devices, only pass common mask */ if (common_mask) list_for_each_entry(unit, &port->unit_list_head, list) - zfcp_erp_modify_unit_status(unit, common_mask, - set_or_clear); + zfcp_erp_modify_unit_status(unit, id, ref, common_mask, + set_or_clear); } /* @@ -1799,19 +1832,23 @@ zfcp_erp_modify_port_status(struct zfcp_port *port, u32 mask, int set_or_clear) * purpose: sets the unit to ERP_FAILED * */ -void -zfcp_erp_modify_unit_status(struct zfcp_unit *unit, u32 mask, int set_or_clear) +void 
zfcp_erp_modify_unit_status(struct zfcp_unit *unit, u8 id, u64 ref, + u32 mask, int set_or_clear) { + u32 changed; + if (set_or_clear == ZFCP_SET) { - atomic_set_mask(mask, &unit->status); + changed = atomic_test_and_set_mask(mask, &unit->status); debug_text_event(unit->port->adapter->erp_dbf, 3, "u_mod_us_s"); } else { - atomic_clear_mask(mask, &unit->status); + changed = atomic_test_and_clear_mask(mask, &unit->status); if (mask & ZFCP_STATUS_COMMON_ERP_FAILED) { atomic_set(&unit->erp_counter, 0); } debug_text_event(unit->port->adapter->erp_dbf, 3, "u_mod_us_c"); } + if (changed) + zfcp_rec_dbf_event_unit(id, ref, unit); debug_event(unit->port->adapter->erp_dbf, 3, &unit->fcp_lun, sizeof (fcp_lun_t)); debug_event(unit->port->adapter->erp_dbf, 3, &mask, sizeof (u32)); @@ -2403,7 +2440,7 @@ zfcp_erp_port_strategy_open_common(struct zfcp_erp_action *erp_action) port->wwpn, zfcp_get_busid_by_adapter(adapter), adapter->peer_wwpn); - zfcp_erp_port_failed(port); + zfcp_erp_port_failed(port, 25, 0); retval = ZFCP_ERP_FAILED; break; } @@ -2461,7 +2498,7 @@ zfcp_erp_port_strategy_open_common(struct zfcp_erp_action *erp_action) "for port 0x%016Lx " "(misconfigured WWPN?)\n", port->wwpn); - zfcp_erp_port_failed(port); + zfcp_erp_port_failed(port, 26, 0); retval = ZFCP_ERP_EXIT; } else { ZFCP_LOG_DEBUG("nameserver look-up failed for " @@ -2567,7 +2604,7 @@ zfcp_erp_port_strategy_open_nameserver_wakeup(struct zfcp_erp_action if (atomic_test_mask( ZFCP_STATUS_COMMON_ERP_FAILED, &adapter->nameserver_port->status)) - zfcp_erp_port_failed(erp_action->port); + zfcp_erp_port_failed(erp_action->port, 27, 0); zfcp_erp_action_ready(erp_action); } } @@ -3274,8 +3311,7 @@ static void zfcp_erp_action_to_ready(struct zfcp_erp_action *erp_action) list_move(&erp_action->list, &erp_action->adapter->erp_ready_head); } -void -zfcp_erp_port_boxed(struct zfcp_port *port) +void zfcp_erp_port_boxed(struct zfcp_port *port, u8 id, u64 ref) { struct zfcp_adapter *adapter = port->adapter; unsigned long flags; @@ -3283,28 +3319,24 @@ zfcp_erp_port_boxed(struct zfcp_port *port) debug_text_event(adapter->erp_dbf, 3, "p_access_boxed"); debug_event(adapter->erp_dbf, 3, &port->wwpn, sizeof(wwn_t)); read_lock_irqsave(&zfcp_data.config_lock, flags); - zfcp_erp_modify_port_status(port, - ZFCP_STATUS_COMMON_ACCESS_BOXED, - ZFCP_SET); + zfcp_erp_modify_port_status(port, id, ref, + ZFCP_STATUS_COMMON_ACCESS_BOXED, ZFCP_SET); read_unlock_irqrestore(&zfcp_data.config_lock, flags); zfcp_erp_port_reopen(port, ZFCP_STATUS_COMMON_ERP_FAILED); } -void -zfcp_erp_unit_boxed(struct zfcp_unit *unit) +void zfcp_erp_unit_boxed(struct zfcp_unit *unit, u8 id, u64 ref) { struct zfcp_adapter *adapter = unit->port->adapter; debug_text_event(adapter->erp_dbf, 3, "u_access_boxed"); debug_event(adapter->erp_dbf, 3, &unit->fcp_lun, sizeof(fcp_lun_t)); - zfcp_erp_modify_unit_status(unit, - ZFCP_STATUS_COMMON_ACCESS_BOXED, - ZFCP_SET); + zfcp_erp_modify_unit_status(unit, id, ref, + ZFCP_STATUS_COMMON_ACCESS_BOXED, ZFCP_SET); zfcp_erp_unit_reopen(unit, ZFCP_STATUS_COMMON_ERP_FAILED); } -void -zfcp_erp_port_access_denied(struct zfcp_port *port) +void zfcp_erp_port_access_denied(struct zfcp_port *port, u8 id, u64 ref) { struct zfcp_adapter *adapter = port->adapter; unsigned long flags; @@ -3312,24 +3344,21 @@ zfcp_erp_port_access_denied(struct zfcp_port *port) debug_text_event(adapter->erp_dbf, 3, "p_access_denied"); debug_event(adapter->erp_dbf, 3, &port->wwpn, sizeof(wwn_t)); read_lock_irqsave(&zfcp_data.config_lock, flags); - zfcp_erp_modify_port_status(port, - 
ZFCP_STATUS_COMMON_ERP_FAILED | - ZFCP_STATUS_COMMON_ACCESS_DENIED, - ZFCP_SET); + zfcp_erp_modify_port_status(port, id, ref, + ZFCP_STATUS_COMMON_ERP_FAILED | + ZFCP_STATUS_COMMON_ACCESS_DENIED, ZFCP_SET); read_unlock_irqrestore(&zfcp_data.config_lock, flags); } -void -zfcp_erp_unit_access_denied(struct zfcp_unit *unit) +void zfcp_erp_unit_access_denied(struct zfcp_unit *unit, u8 id, u64 ref) { struct zfcp_adapter *adapter = unit->port->adapter; debug_text_event(adapter->erp_dbf, 3, "u_access_denied"); debug_event(adapter->erp_dbf, 3, &unit->fcp_lun, sizeof(fcp_lun_t)); - zfcp_erp_modify_unit_status(unit, - ZFCP_STATUS_COMMON_ERP_FAILED | - ZFCP_STATUS_COMMON_ACCESS_DENIED, - ZFCP_SET); + zfcp_erp_modify_unit_status(unit, id, ref, + ZFCP_STATUS_COMMON_ERP_FAILED | + ZFCP_STATUS_COMMON_ACCESS_DENIED, ZFCP_SET); } void diff --git a/drivers/s390/scsi/zfcp_ext.h b/drivers/s390/scsi/zfcp_ext.h index f64951ba98c7..20ad6fde2e22 100644 --- a/drivers/s390/scsi/zfcp_ext.h +++ b/drivers/s390/scsi/zfcp_ext.h @@ -131,22 +131,23 @@ extern int zfcp_scsi_command_sync(struct zfcp_unit *, struct scsi_cmnd *, int); extern struct fc_function_template zfcp_transport_functions; /******************************** ERP ****************************************/ -extern void zfcp_erp_modify_adapter_status(struct zfcp_adapter *, u32, int); +extern void zfcp_erp_modify_adapter_status(struct zfcp_adapter *, u8, u64, u32, + int); extern int zfcp_erp_adapter_reopen(struct zfcp_adapter *, int); extern int zfcp_erp_adapter_shutdown(struct zfcp_adapter *, int); -extern void zfcp_erp_adapter_failed(struct zfcp_adapter *); +extern void zfcp_erp_adapter_failed(struct zfcp_adapter *, u8, u64); -extern void zfcp_erp_modify_port_status(struct zfcp_port *, u32, int); +extern void zfcp_erp_modify_port_status(struct zfcp_port *, u8, u64, u32, int); extern int zfcp_erp_port_reopen(struct zfcp_port *, int); extern int zfcp_erp_port_shutdown(struct zfcp_port *, int); extern int zfcp_erp_port_forced_reopen(struct zfcp_port *, int); -extern void zfcp_erp_port_failed(struct zfcp_port *); +extern void zfcp_erp_port_failed(struct zfcp_port *, u8, u64); extern int zfcp_erp_port_reopen_all(struct zfcp_adapter *, int); -extern void zfcp_erp_modify_unit_status(struct zfcp_unit *, u32, int); +extern void zfcp_erp_modify_unit_status(struct zfcp_unit *, u8, u64, u32, int); extern int zfcp_erp_unit_reopen(struct zfcp_unit *, int); extern int zfcp_erp_unit_shutdown(struct zfcp_unit *, int); -extern void zfcp_erp_unit_failed(struct zfcp_unit *); +extern void zfcp_erp_unit_failed(struct zfcp_unit *, u8, u64); extern int zfcp_erp_thread_setup(struct zfcp_adapter *); extern int zfcp_erp_thread_kill(struct zfcp_adapter *); @@ -155,10 +156,10 @@ extern void zfcp_erp_async_handler(struct zfcp_erp_action *, unsigned long); extern int zfcp_test_link(struct zfcp_port *); -extern void zfcp_erp_port_boxed(struct zfcp_port *); -extern void zfcp_erp_unit_boxed(struct zfcp_unit *); -extern void zfcp_erp_port_access_denied(struct zfcp_port *); -extern void zfcp_erp_unit_access_denied(struct zfcp_unit *); +extern void zfcp_erp_port_boxed(struct zfcp_port *, u8 id, u64 ref); +extern void zfcp_erp_unit_boxed(struct zfcp_unit *, u8 id, u64 ref); +extern void zfcp_erp_port_access_denied(struct zfcp_port *, u8 id, u64 ref); +extern void zfcp_erp_unit_access_denied(struct zfcp_unit *, u8 id, u64 ref); extern void zfcp_erp_adapter_access_changed(struct zfcp_adapter *); extern void zfcp_erp_port_access_changed(struct zfcp_port *); extern void 
zfcp_erp_unit_access_changed(struct zfcp_unit *); @@ -166,6 +167,9 @@ extern void zfcp_erp_unit_access_changed(struct zfcp_unit *); /******************************** AUX ****************************************/ extern void zfcp_rec_dbf_event_thread(u8 id, struct zfcp_adapter *adapter, int lock); +extern void zfcp_rec_dbf_event_adapter(u8 id, u64 ref, struct zfcp_adapter *); +extern void zfcp_rec_dbf_event_port(u8 id, u64 ref, struct zfcp_port *port); +extern void zfcp_rec_dbf_event_unit(u8 id, u64 ref, struct zfcp_unit *unit); extern void zfcp_hba_dbf_event_fsf_response(struct zfcp_fsf_req *); extern void zfcp_hba_dbf_event_fsf_unsol(const char *, struct zfcp_adapter *, diff --git a/drivers/s390/scsi/zfcp_fsf.c b/drivers/s390/scsi/zfcp_fsf.c index 264f5f1bdde6..2b7ddb78767c 100644 --- a/drivers/s390/scsi/zfcp_fsf.c +++ b/drivers/s390/scsi/zfcp_fsf.c @@ -46,7 +46,7 @@ static int zfcp_fsf_req_send(struct zfcp_fsf_req *); static int zfcp_fsf_protstatus_eval(struct zfcp_fsf_req *); static int zfcp_fsf_fsfstatus_eval(struct zfcp_fsf_req *); static int zfcp_fsf_fsfstatus_qual_eval(struct zfcp_fsf_req *); -static void zfcp_fsf_link_down_info_eval(struct zfcp_adapter *, +static void zfcp_fsf_link_down_info_eval(struct zfcp_fsf_req *, u8, struct fsf_link_down_info *); static int zfcp_fsf_req_dispatch(struct zfcp_fsf_req *); @@ -342,7 +342,7 @@ zfcp_fsf_protstatus_eval(struct zfcp_fsf_req *fsf_req) break; case FSF_PROT_LINK_DOWN: - zfcp_fsf_link_down_info_eval(adapter, + zfcp_fsf_link_down_info_eval(fsf_req, 37, &prot_status_qual->link_down_info); zfcp_erp_adapter_reopen(adapter, 0); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; @@ -354,8 +354,8 @@ zfcp_fsf_protstatus_eval(struct zfcp_fsf_req *fsf_req) "Re-starting operations on this adapter.\n", zfcp_get_busid_by_adapter(adapter)); /* All ports should be marked as ready to run again */ - zfcp_erp_modify_adapter_status(adapter, - ZFCP_STATUS_COMMON_RUNNING, + zfcp_erp_modify_adapter_status(adapter, 28, + 0, ZFCP_STATUS_COMMON_RUNNING, ZFCP_SET); zfcp_erp_adapter_reopen(adapter, ZFCP_STATUS_ADAPTER_LINK_UNPLUGGED @@ -506,9 +506,11 @@ zfcp_fsf_fsfstatus_qual_eval(struct zfcp_fsf_req *fsf_req) * zfcp_fsf_link_down_info_eval - evaluate link down information block */ static void -zfcp_fsf_link_down_info_eval(struct zfcp_adapter *adapter, +zfcp_fsf_link_down_info_eval(struct zfcp_fsf_req *fsf_req, u8 id, struct fsf_link_down_info *link_down) { + struct zfcp_adapter *adapter = fsf_req->adapter; + if (atomic_test_mask(ZFCP_STATUS_ADAPTER_LINK_UNPLUGGED, &adapter->status)) return; @@ -599,7 +601,7 @@ zfcp_fsf_link_down_info_eval(struct zfcp_adapter *adapter, link_down->vendor_specific_code); out: - zfcp_erp_adapter_failed(adapter); + zfcp_erp_adapter_failed(adapter, id, (u64)fsf_req); } /* @@ -897,7 +899,7 @@ zfcp_fsf_status_read_handler(struct zfcp_fsf_req *fsf_req) case FSF_STATUS_READ_SUB_NO_PHYSICAL_LINK: ZFCP_LOG_INFO("Physical link to adapter %s is down\n", zfcp_get_busid_by_adapter(adapter)); - zfcp_fsf_link_down_info_eval(adapter, + zfcp_fsf_link_down_info_eval(fsf_req, 38, (struct fsf_link_down_info *) &status_buffer->payload); break; @@ -905,7 +907,7 @@ zfcp_fsf_status_read_handler(struct zfcp_fsf_req *fsf_req) ZFCP_LOG_INFO("Local link to adapter %s is down " "due to failed FDISC login\n", zfcp_get_busid_by_adapter(adapter)); - zfcp_fsf_link_down_info_eval(adapter, + zfcp_fsf_link_down_info_eval(fsf_req, 39, (struct fsf_link_down_info *) &status_buffer->payload); break; @@ -913,13 +915,13 @@ zfcp_fsf_status_read_handler(struct zfcp_fsf_req 
*fsf_req) ZFCP_LOG_INFO("Local link to adapter %s is down " "due to firmware update on adapter\n", zfcp_get_busid_by_adapter(adapter)); - zfcp_fsf_link_down_info_eval(adapter, NULL); + zfcp_fsf_link_down_info_eval(fsf_req, 40, NULL); break; default: ZFCP_LOG_INFO("Local link to adapter %s is down " "due to unknown reason\n", zfcp_get_busid_by_adapter(adapter)); - zfcp_fsf_link_down_info_eval(adapter, NULL); + zfcp_fsf_link_down_info_eval(fsf_req, 41, NULL); }; break; @@ -928,7 +930,7 @@ zfcp_fsf_status_read_handler(struct zfcp_fsf_req *fsf_req) "Restarting operations on this adapter\n", zfcp_get_busid_by_adapter(adapter)); /* All ports should be marked as ready to run again */ - zfcp_erp_modify_adapter_status(adapter, + zfcp_erp_modify_adapter_status(adapter, 30, 0, ZFCP_STATUS_COMMON_RUNNING, ZFCP_SET); zfcp_erp_adapter_reopen(adapter, @@ -1215,7 +1217,7 @@ zfcp_fsf_abort_fcp_command_handler(struct zfcp_fsf_req *new_fsf_req) zfcp_get_busid_by_unit(unit)); debug_text_event(new_fsf_req->adapter->erp_dbf, 2, "fsf_s_pboxed"); - zfcp_erp_port_boxed(unit->port); + zfcp_erp_port_boxed(unit->port, 47, (u64)new_fsf_req); new_fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR | ZFCP_STATUS_FSFREQ_RETRY; break; @@ -1227,7 +1229,7 @@ zfcp_fsf_abort_fcp_command_handler(struct zfcp_fsf_req *new_fsf_req) unit->fcp_lun, unit->port->wwpn, zfcp_get_busid_by_unit(unit)); debug_text_event(new_fsf_req->adapter->erp_dbf, 1, "fsf_s_lboxed"); - zfcp_erp_unit_boxed(unit); + zfcp_erp_unit_boxed(unit, 48, (u64)new_fsf_req); new_fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR | ZFCP_STATUS_FSFREQ_RETRY; break; @@ -1519,7 +1521,7 @@ zfcp_fsf_send_ct_handler(struct zfcp_fsf_req *fsf_req) } } debug_text_event(adapter->erp_dbf, 1, "fsf_s_access"); - zfcp_erp_port_access_denied(port); + zfcp_erp_port_access_denied(port, 55, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -1554,7 +1556,7 @@ zfcp_fsf_send_ct_handler(struct zfcp_fsf_req *fsf_req) "(adapter %s, port d_id=0x%06x)\n", zfcp_get_busid_by_port(port), port->d_id); debug_text_event(adapter->erp_dbf, 2, "fsf_s_pboxed"); - zfcp_erp_port_boxed(port); + zfcp_erp_port_boxed(port, 49, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR | ZFCP_STATUS_FSFREQ_RETRY; break; @@ -1880,7 +1882,7 @@ static int zfcp_fsf_send_els_handler(struct zfcp_fsf_req *fsf_req) } debug_text_event(adapter->erp_dbf, 1, "fsf_s_access"); if (port != NULL) - zfcp_erp_port_access_denied(port); + zfcp_erp_port_access_denied(port, 56, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -2210,7 +2212,7 @@ zfcp_fsf_exchange_config_data_handler(struct zfcp_fsf_req *fsf_req) atomic_set_mask(ZFCP_STATUS_ADAPTER_XCONFIG_OK, &adapter->status); - zfcp_fsf_link_down_info_eval(adapter, + zfcp_fsf_link_down_info_eval(fsf_req, 42, &qtcb->header.fsf_status_qual.link_down_info); break; default: @@ -2393,7 +2395,7 @@ zfcp_fsf_exchange_port_data_handler(struct zfcp_fsf_req *fsf_req) case FSF_EXCHANGE_CONFIG_DATA_INCOMPLETE: zfcp_fsf_exchange_port_evaluate(fsf_req, 0); atomic_set_mask(ZFCP_STATUS_ADAPTER_XPORT_OK, &adapter->status); - zfcp_fsf_link_down_info_eval(adapter, + zfcp_fsf_link_down_info_eval(fsf_req, 43, &qtcb->header.fsf_status_qual.link_down_info); break; default: @@ -2523,7 +2525,7 @@ zfcp_fsf_open_port_handler(struct zfcp_fsf_req *fsf_req) } } debug_text_event(fsf_req->adapter->erp_dbf, 1, "fsf_s_access"); - zfcp_erp_port_access_denied(port); + zfcp_erp_port_access_denied(port, 57, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -2534,7 +2536,7 
@@ zfcp_fsf_open_port_handler(struct zfcp_fsf_req *fsf_req) port->wwpn, zfcp_get_busid_by_port(port)); debug_text_event(fsf_req->adapter->erp_dbf, 1, "fsf_s_max_ports"); - zfcp_erp_port_failed(port); + zfcp_erp_port_failed(port, 31, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -2560,7 +2562,7 @@ zfcp_fsf_open_port_handler(struct zfcp_fsf_req *fsf_req) zfcp_get_busid_by_port(port)); debug_text_exception(fsf_req->adapter->erp_dbf, 0, "fsf_sq_no_retry"); - zfcp_erp_port_failed(port); + zfcp_erp_port_failed(port, 32, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; default: @@ -2773,7 +2775,7 @@ zfcp_fsf_close_port_handler(struct zfcp_fsf_req *fsf_req) ZFCP_LOG_TRACE("remote port 0x016%Lx on adapter %s closed, " "port handle 0x%x\n", port->wwpn, zfcp_get_busid_by_port(port), port->handle); - zfcp_erp_modify_port_status(port, + zfcp_erp_modify_port_status(port, 33, (u64)fsf_req, ZFCP_STATUS_COMMON_OPEN, ZFCP_CLEAR); retval = 0; @@ -2923,7 +2925,7 @@ zfcp_fsf_close_physical_port_handler(struct zfcp_fsf_req *fsf_req) } } debug_text_event(fsf_req->adapter->erp_dbf, 1, "fsf_s_access"); - zfcp_erp_port_access_denied(port); + zfcp_erp_port_access_denied(port, 58, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -2934,7 +2936,7 @@ zfcp_fsf_close_physical_port_handler(struct zfcp_fsf_req *fsf_req) port->wwpn, zfcp_get_busid_by_port(port)); debug_text_event(fsf_req->adapter->erp_dbf, 1, "fsf_s_pboxed"); - zfcp_erp_port_boxed(port); + zfcp_erp_port_boxed(port, 50, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR | ZFCP_STATUS_FSFREQ_RETRY; @@ -3159,7 +3161,7 @@ zfcp_fsf_open_unit_handler(struct zfcp_fsf_req *fsf_req) } } debug_text_event(adapter->erp_dbf, 1, "fsf_s_access"); - zfcp_erp_unit_access_denied(unit); + zfcp_erp_unit_access_denied(unit, 59, (u64)fsf_req); atomic_clear_mask(ZFCP_STATUS_UNIT_SHARED, &unit->status); atomic_clear_mask(ZFCP_STATUS_UNIT_READONLY, &unit->status); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; @@ -3170,7 +3172,7 @@ zfcp_fsf_open_unit_handler(struct zfcp_fsf_req *fsf_req) "needs to be reopened\n", unit->port->wwpn, zfcp_get_busid_by_unit(unit)); debug_text_event(adapter->erp_dbf, 2, "fsf_s_pboxed"); - zfcp_erp_port_boxed(unit->port); + zfcp_erp_port_boxed(unit->port, 51, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR | ZFCP_STATUS_FSFREQ_RETRY; break; @@ -3212,7 +3214,7 @@ zfcp_fsf_open_unit_handler(struct zfcp_fsf_req *fsf_req) sizeof (union fsf_status_qual)); debug_text_event(adapter->erp_dbf, 2, "fsf_s_l_sh_vio"); - zfcp_erp_unit_access_denied(unit); + zfcp_erp_unit_access_denied(unit, 60, (u64)fsf_req); atomic_clear_mask(ZFCP_STATUS_UNIT_SHARED, &unit->status); atomic_clear_mask(ZFCP_STATUS_UNIT_READONLY, &unit->status); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; @@ -3228,7 +3230,7 @@ zfcp_fsf_open_unit_handler(struct zfcp_fsf_req *fsf_req) zfcp_get_busid_by_unit(unit)); debug_text_event(adapter->erp_dbf, 1, "fsf_s_max_units"); - zfcp_erp_unit_failed(unit); + zfcp_erp_unit_failed(unit, 34, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -3307,13 +3309,13 @@ zfcp_fsf_open_unit_handler(struct zfcp_fsf_req *fsf_req) if (exclusive && !readwrite) { ZFCP_LOG_NORMAL("exclusive access of read-only " "unit not supported\n"); - zfcp_erp_unit_failed(unit); + zfcp_erp_unit_failed(unit, 35, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; zfcp_erp_unit_shutdown(unit, 0); } else if (!exclusive && readwrite) { ZFCP_LOG_NORMAL("shared access of read-write " "unit 
not supported\n"); - zfcp_erp_unit_failed(unit); + zfcp_erp_unit_failed(unit, 36, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; zfcp_erp_unit_shutdown(unit, 0); } @@ -3471,7 +3473,7 @@ zfcp_fsf_close_unit_handler(struct zfcp_fsf_req *fsf_req) unit->port->wwpn, zfcp_get_busid_by_unit(unit)); debug_text_event(fsf_req->adapter->erp_dbf, 2, "fsf_s_pboxed"); - zfcp_erp_port_boxed(unit->port); + zfcp_erp_port_boxed(unit->port, 52, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR | ZFCP_STATUS_FSFREQ_RETRY; break; @@ -3928,7 +3930,7 @@ zfcp_fsf_send_fcp_command_handler(struct zfcp_fsf_req *fsf_req) } } debug_text_event(fsf_req->adapter->erp_dbf, 1, "fsf_s_access"); - zfcp_erp_unit_access_denied(unit); + zfcp_erp_unit_access_denied(unit, 61, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -3967,7 +3969,7 @@ zfcp_fsf_send_fcp_command_handler(struct zfcp_fsf_req *fsf_req) "needs to be reopened\n", unit->port->wwpn, zfcp_get_busid_by_unit(unit)); debug_text_event(fsf_req->adapter->erp_dbf, 2, "fsf_s_pboxed"); - zfcp_erp_port_boxed(unit->port); + zfcp_erp_port_boxed(unit->port, 53, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR | ZFCP_STATUS_FSFREQ_RETRY; break; @@ -3978,7 +3980,7 @@ zfcp_fsf_send_fcp_command_handler(struct zfcp_fsf_req *fsf_req) zfcp_get_busid_by_unit(unit), unit->port->wwpn, unit->fcp_lun); debug_text_event(fsf_req->adapter->erp_dbf, 1, "fsf_s_lboxed"); - zfcp_erp_unit_boxed(unit); + zfcp_erp_unit_boxed(unit, 54, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR | ZFCP_STATUS_FSFREQ_RETRY; break; diff --git a/drivers/s390/scsi/zfcp_scsi.c b/drivers/s390/scsi/zfcp_scsi.c index ff97a61ad964..1198f0b27e3a 100644 --- a/drivers/s390/scsi/zfcp_scsi.c +++ b/drivers/s390/scsi/zfcp_scsi.c @@ -185,7 +185,7 @@ static void zfcp_scsi_slave_destroy(struct scsi_device *sdpnt) atomic_clear_mask(ZFCP_STATUS_UNIT_REGISTERED, &unit->status); sdpnt->hostdata = NULL; unit->device = NULL; - zfcp_erp_unit_failed(unit); + zfcp_erp_unit_failed(unit, 12, 0); zfcp_unit_put(unit); } else ZFCP_LOG_NORMAL("bug: no unit associated with SCSI device at " diff --git a/drivers/s390/scsi/zfcp_sysfs_adapter.c b/drivers/s390/scsi/zfcp_sysfs_adapter.c index 705c6d4428f3..ec340530c824 100644 --- a/drivers/s390/scsi/zfcp_sysfs_adapter.c +++ b/drivers/s390/scsi/zfcp_sysfs_adapter.c @@ -191,8 +191,8 @@ zfcp_sysfs_adapter_failed_store(struct device *dev, struct device_attribute *att goto out; } - zfcp_erp_modify_adapter_status(adapter, ZFCP_STATUS_COMMON_RUNNING, - ZFCP_SET); + zfcp_erp_modify_adapter_status(adapter, 44, 0, + ZFCP_STATUS_COMMON_RUNNING, ZFCP_SET); zfcp_erp_adapter_reopen(adapter, ZFCP_STATUS_COMMON_ERP_FAILED); zfcp_erp_wait(adapter); out: diff --git a/drivers/s390/scsi/zfcp_sysfs_port.c b/drivers/s390/scsi/zfcp_sysfs_port.c index 1320c0591431..9cc2bc5be724 100644 --- a/drivers/s390/scsi/zfcp_sysfs_port.c +++ b/drivers/s390/scsi/zfcp_sysfs_port.c @@ -193,7 +193,8 @@ zfcp_sysfs_port_failed_store(struct device *dev, struct device_attribute *attr, goto out; } - zfcp_erp_modify_port_status(port, ZFCP_STATUS_COMMON_RUNNING, ZFCP_SET); + zfcp_erp_modify_port_status(port, 45, 0, + ZFCP_STATUS_COMMON_RUNNING, ZFCP_SET); zfcp_erp_port_reopen(port, ZFCP_STATUS_COMMON_ERP_FAILED); zfcp_erp_wait(port->adapter); out: diff --git a/drivers/s390/scsi/zfcp_sysfs_unit.c b/drivers/s390/scsi/zfcp_sysfs_unit.c index 63f75ee95c33..52a5f6a25ff9 100644 --- a/drivers/s390/scsi/zfcp_sysfs_unit.c +++ b/drivers/s390/scsi/zfcp_sysfs_unit.c @@ -94,7 +94,8 @@ 
zfcp_sysfs_unit_failed_store(struct device *dev, struct device_attribute *attr, goto out; } - zfcp_erp_modify_unit_status(unit, ZFCP_STATUS_COMMON_RUNNING, ZFCP_SET); + zfcp_erp_modify_unit_status(unit, 46, 0, + ZFCP_STATUS_COMMON_RUNNING, ZFCP_SET); zfcp_erp_unit_reopen(unit, ZFCP_STATUS_COMMON_ERP_FAILED); zfcp_erp_wait(unit->port->adapter); out: -- cgit v1.2.3-59-g8ed1b From 9467a9b3efdd9041202f71cc270bda827a7ec777 Mon Sep 17 00:00:00 2001 From: Martin Peschke Date: Thu, 27 Mar 2008 14:22:03 +0100 Subject: [SCSI] zfcp: Trace all triggers of error recovery activity This patch allows any recovery event to be traced back to an exact cause, e.g. a particular request identified by an id (address). Signed-off-by: Martin Peschke Signed-off-by: Christof Schmitt Signed-off-by: James Bottomley --- drivers/s390/scsi/zfcp_aux.c | 31 ++--- drivers/s390/scsi/zfcp_ccw.c | 12 +- drivers/s390/scsi/zfcp_dbf.c | 134 +++++++++++++++++++++ drivers/s390/scsi/zfcp_def.h | 14 +++ drivers/s390/scsi/zfcp_erp.c | 213 +++++++++++++++++---------------- drivers/s390/scsi/zfcp_ext.h | 25 ++-- drivers/s390/scsi/zfcp_fsf.c | 103 +++++++++------- drivers/s390/scsi/zfcp_qdio.c | 4 +- drivers/s390/scsi/zfcp_scsi.c | 2 +- drivers/s390/scsi/zfcp_sysfs_adapter.c | 6 +- drivers/s390/scsi/zfcp_sysfs_port.c | 6 +- drivers/s390/scsi/zfcp_sysfs_unit.c | 2 +- 12 files changed, 362 insertions(+), 190 deletions(-) (limited to 'drivers/s390') diff --git a/drivers/s390/scsi/zfcp_aux.c b/drivers/s390/scsi/zfcp_aux.c index f3eff7ebcb6b..d2a744200c91 100644 --- a/drivers/s390/scsi/zfcp_aux.c +++ b/drivers/s390/scsi/zfcp_aux.c @@ -1326,10 +1326,10 @@ zfcp_nameserver_enqueue(struct zfcp_adapter *adapter) #define ZFCP_LOG_AREA ZFCP_LOG_AREA_FC -static void -zfcp_fsf_incoming_els_rscn(struct zfcp_adapter *adapter, - struct fsf_status_read_buffer *status_buffer) +static void zfcp_fsf_incoming_els_rscn(struct zfcp_fsf_req *fsf_req) { + struct fsf_status_read_buffer *status_buffer = (void*)fsf_req->data; + struct zfcp_adapter *adapter = fsf_req->adapter; struct fcp_rscn_head *fcp_rscn_head; struct fcp_rscn_element *fcp_rscn_element; struct zfcp_port *port; @@ -1376,7 +1376,8 @@ zfcp_fsf_incoming_els_rscn(struct zfcp_adapter *adapter, ZFCP_LOG_INFO("incoming RSCN, trying to open " "port 0x%016Lx\n", port->wwpn); zfcp_erp_port_reopen(port, - ZFCP_STATUS_COMMON_ERP_FAILED); + ZFCP_STATUS_COMMON_ERP_FAILED, + 82, (u64)fsf_req); continue; } @@ -1407,10 +1408,10 @@ zfcp_fsf_incoming_els_rscn(struct zfcp_adapter *adapter, } } -static void -zfcp_fsf_incoming_els_plogi(struct zfcp_adapter *adapter, - struct fsf_status_read_buffer *status_buffer) +static void zfcp_fsf_incoming_els_plogi(struct zfcp_fsf_req *fsf_req) { + struct fsf_status_read_buffer *status_buffer = (void*)fsf_req->data; + struct zfcp_adapter *adapter = fsf_req->adapter; struct fsf_plogi *els_plogi; struct zfcp_port *port; unsigned long flags; @@ -1429,14 +1430,14 @@ zfcp_fsf_incoming_els_plogi(struct zfcp_adapter *adapter, status_buffer->d_id, zfcp_get_busid_by_adapter(adapter)); } else { - zfcp_erp_port_forced_reopen(port, 0); + zfcp_erp_port_forced_reopen(port, 0, 83, (u64)fsf_req); } } -static void -zfcp_fsf_incoming_els_logo(struct zfcp_adapter *adapter, - struct fsf_status_read_buffer *status_buffer) +static void zfcp_fsf_incoming_els_logo(struct zfcp_fsf_req *fsf_req) { + struct fsf_status_read_buffer *status_buffer = (void*)fsf_req->data; + struct zfcp_adapter *adapter = fsf_req->adapter; struct fcp_logo *els_logo = (struct fcp_logo *) status_buffer->payload; struct zfcp_port 
*port; unsigned long flags; @@ -1454,7 +1455,7 @@ zfcp_fsf_incoming_els_logo(struct zfcp_adapter *adapter, status_buffer->d_id, zfcp_get_busid_by_adapter(adapter)); } else { - zfcp_erp_port_forced_reopen(port, 0); + zfcp_erp_port_forced_reopen(port, 0, 84, (u64)fsf_req); } } @@ -1481,12 +1482,12 @@ zfcp_fsf_incoming_els(struct zfcp_fsf_req *fsf_req) zfcp_san_dbf_event_incoming_els(fsf_req); if (els_type == LS_PLOGI) - zfcp_fsf_incoming_els_plogi(adapter, status_buffer); + zfcp_fsf_incoming_els_plogi(fsf_req); else if (els_type == LS_LOGO) - zfcp_fsf_incoming_els_logo(adapter, status_buffer); + zfcp_fsf_incoming_els_logo(fsf_req); else if ((els_type & 0xffff0000) == LS_RSCN) /* we are only concerned with the command, not the length */ - zfcp_fsf_incoming_els_rscn(adapter, status_buffer); + zfcp_fsf_incoming_els_rscn(fsf_req); else zfcp_fsf_incoming_els_unknown(adapter, status_buffer); } diff --git a/drivers/s390/scsi/zfcp_ccw.c b/drivers/s390/scsi/zfcp_ccw.c index dd40b1c350b9..8edd90583cd4 100644 --- a/drivers/s390/scsi/zfcp_ccw.c +++ b/drivers/s390/scsi/zfcp_ccw.c @@ -172,7 +172,7 @@ zfcp_ccw_set_online(struct ccw_device *ccw_device) zfcp_erp_modify_adapter_status(adapter, 10, 0, ZFCP_STATUS_COMMON_RUNNING, ZFCP_SET); - zfcp_erp_adapter_reopen(adapter, ZFCP_STATUS_COMMON_ERP_FAILED); + zfcp_erp_adapter_reopen(adapter, ZFCP_STATUS_COMMON_ERP_FAILED, 85, 0); zfcp_erp_wait(adapter); goto out; @@ -197,7 +197,7 @@ zfcp_ccw_set_offline(struct ccw_device *ccw_device) down(&zfcp_data.config_sema); adapter = dev_get_drvdata(&ccw_device->dev); - zfcp_erp_adapter_shutdown(adapter, 0); + zfcp_erp_adapter_shutdown(adapter, 0, 86, 0); zfcp_erp_wait(adapter); zfcp_erp_thread_kill(adapter); up(&zfcp_data.config_sema); @@ -224,13 +224,13 @@ zfcp_ccw_notify(struct ccw_device *ccw_device, int event) ZFCP_LOG_NORMAL("adapter %s: device gone\n", zfcp_get_busid_by_adapter(adapter)); debug_text_event(adapter->erp_dbf,1,"dev_gone"); - zfcp_erp_adapter_shutdown(adapter, 0); + zfcp_erp_adapter_shutdown(adapter, 0, 87, 0); break; case CIO_NO_PATH: ZFCP_LOG_NORMAL("adapter %s: no path\n", zfcp_get_busid_by_adapter(adapter)); debug_text_event(adapter->erp_dbf,1,"no_path"); - zfcp_erp_adapter_shutdown(adapter, 0); + zfcp_erp_adapter_shutdown(adapter, 0, 88, 0); break; case CIO_OPER: ZFCP_LOG_NORMAL("adapter %s: operational again\n", @@ -240,7 +240,7 @@ zfcp_ccw_notify(struct ccw_device *ccw_device, int event) ZFCP_STATUS_COMMON_RUNNING, ZFCP_SET); zfcp_erp_adapter_reopen(adapter, - ZFCP_STATUS_COMMON_ERP_FAILED); + ZFCP_STATUS_COMMON_ERP_FAILED, 89, 0); break; } zfcp_erp_wait(adapter); @@ -272,7 +272,7 @@ zfcp_ccw_shutdown(struct ccw_device *cdev) down(&zfcp_data.config_sema); adapter = dev_get_drvdata(&cdev->dev); - zfcp_erp_adapter_shutdown(adapter, 0); + zfcp_erp_adapter_shutdown(adapter, 0, 90, 0); zfcp_erp_wait(adapter); up(&zfcp_data.config_sema); } diff --git a/drivers/s390/scsi/zfcp_dbf.c b/drivers/s390/scsi/zfcp_dbf.c index 2fcfe9bec554..f207b0bd0cad 100644 --- a/drivers/s390/scsi/zfcp_dbf.c +++ b/drivers/s390/scsi/zfcp_dbf.c @@ -523,6 +523,7 @@ static struct debug_view zfcp_hba_dbf_view = { static const char *zfcp_rec_dbf_tags[] = { [ZFCP_REC_DBF_ID_THREAD] = "thread", [ZFCP_REC_DBF_ID_TARGET] = "target", + [ZFCP_REC_DBF_ID_TRIGGER] = "trigger", }; static const char *zfcp_rec_dbf_ids[] = { @@ -587,6 +588,89 @@ static const char *zfcp_rec_dbf_ids[] = { [59] = "unit access denied open unit", [60] = "shared unit access denied open unit", [61] = "unit access denied fcp", + [62] = "request timeout", + [63] = 
"adisc link test reject or timeout", + [64] = "adisc link test d_id changed", + [65] = "adisc link test failed", + [66] = "recovery out of memory", + [67] = "adapter recovery repeated after state change", + [68] = "port recovery repeated after state change", + [69] = "unit recovery repeated after state change", + [70] = "port recovery follow-up after successful adapter recovery", + [71] = "adapter recovery escalation after failed adapter recovery", + [72] = "port recovery follow-up after successful physical port " + "recovery", + [73] = "adapter recovery escalation after failed physical port " + "recovery", + [74] = "unit recovery follow-up after successful port recovery", + [75] = "physical port recovery escalation after failed port " + "recovery", + [76] = "port recovery escalation after failed unit recovery", + [77] = "recovery opening nameserver port", + [78] = "duplicate request id", + [79] = "link down", + [80] = "exclusive read-only unit access unsupported", + [81] = "shared read-write unit access unsupported", + [82] = "incoming rscn", + [83] = "incoming plogi", + [84] = "incoming logo", + [85] = "online", + [86] = "offline", + [87] = "ccw device gone", + [88] = "ccw device no path", + [89] = "ccw device operational", + [90] = "ccw device shutdown", + [91] = "sysfs port addition", + [92] = "sysfs port removal", + [93] = "sysfs adapter recovery", + [94] = "sysfs unit addition", + [95] = "sysfs unit removal", + [96] = "sysfs port recovery", + [97] = "sysfs unit recovery", + [98] = "sequence number mismatch", + [99] = "link up", + [100] = "error state", + [101] = "status read physical port closed", + [102] = "link up status read", + [103] = "too many failed status read buffers", + [104] = "port handle not valid abort", + [105] = "lun handle not valid abort", + [106] = "port handle not valid ct", + [107] = "port handle not valid close port", + [108] = "port handle not valid close physical port", + [109] = "port handle not valid open unit", + [110] = "port handle not valid close unit", + [111] = "lun handle not valid close unit", + [112] = "port handle not valid fcp", + [113] = "lun handle not valid fcp", + [114] = "handle mismatch fcp", + [115] = "lun not valid fcp", + [116] = "qdio send failed", + [117] = "version mismatch", + [118] = "incompatible qtcb type", + [119] = "unknown protocol status", + [120] = "unknown fsf command", + [121] = "no recommendation for status qualifier", + [122] = "status read physical port closed in error", + [123] = "fc service class not supported ct", + [124] = "fc service class not supported els", + [125] = "need newer zfcp", + [126] = "need newer microcode", + [127] = "arbitrated loop not supported", + [128] = "unknown topology", + [129] = "qtcb size mismatch", + [130] = "unknown fsf status ecd", + [131] = "fcp request too big", + [132] = "fc service class not supported fcp", + [133] = "data direction not valid fcp", + [134] = "command length not valid fcp", + [135] = "status read act update", + [136] = "status read cfdc update", + [137] = "hbaapi port open", + [138] = "hbaapi unit open", + [139] = "hbaapi unit shutdown", + [140] = "qdio error", + [141] = "scsi host reset", }; static int zfcp_rec_dbf_view_format(debug_info_t *id, struct debug_view *view, @@ -613,6 +697,17 @@ static int zfcp_rec_dbf_view_format(debug_info_t *id, struct debug_view *view, zfcp_dbf_out(&p, "wwpn", "0x%016Lx", r->u.target.wwpn); zfcp_dbf_out(&p, "fcp_lun", "0x%016Lx", r->u.target.fcp_lun); break; + case ZFCP_REC_DBF_ID_TRIGGER: + zfcp_dbf_out(&p, "reference", "0x%016Lx", 
r->u.trigger.ref); + zfcp_dbf_out(&p, "erp_action", "0x%016Lx", r->u.trigger.action); + zfcp_dbf_out(&p, "requested", "%d", r->u.trigger.want); + zfcp_dbf_out(&p, "executed", "%d", r->u.trigger.need); + zfcp_dbf_out(&p, "wwpn", "0x%016Lx", r->u.trigger.wwpn); + zfcp_dbf_out(&p, "fcp_lun", "0x%016Lx", r->u.trigger.fcp_lun); + zfcp_dbf_out(&p, "adapter_status", "0x%08x", r->u.trigger.as); + zfcp_dbf_out(&p, "port_status", "0x%08x", r->u.trigger.ps); + zfcp_dbf_out(&p, "unit_status", "0x%08x", r->u.trigger.us); + break; } sprintf(p, "\n"); return (p - buf) + 1; @@ -727,6 +822,45 @@ void zfcp_rec_dbf_event_unit(u8 id, u64 ref, struct zfcp_unit *unit) unit->fcp_lun); } +/** + * zfcp_rec_dbf_event_trigger - trace event for triggered error recovery + * @id2: identifier for error recovery trigger + * @ref: additional reference (e.g. request) + * @want: originally requested error recovery action + * @need: error recovery action actually initiated + * @action: address of error recovery action struct + * @adapter: adapter + * @port: port + * @unit: unit + */ +void zfcp_rec_dbf_event_trigger(u8 id2, u64 ref, u8 want, u8 need, u64 action, + struct zfcp_adapter *adapter, + struct zfcp_port *port, struct zfcp_unit *unit) +{ + struct zfcp_rec_dbf_record *r = &adapter->rec_dbf_buf; + unsigned long flags; + + spin_lock_irqsave(&adapter->rec_dbf_lock, flags); + memset(r, 0, sizeof(*r)); + r->id = ZFCP_REC_DBF_ID_TRIGGER; + r->id2 = id2; + r->u.trigger.ref = ref; + r->u.trigger.want = want; + r->u.trigger.need = need; + r->u.trigger.action = action; + r->u.trigger.as = atomic_read(&adapter->status); + if (port) { + r->u.trigger.ps = atomic_read(&port->status); + r->u.trigger.wwpn = port->wwpn; + } + if (unit) { + r->u.trigger.us = atomic_read(&unit->status); + r->u.trigger.fcp_lun = unit->fcp_lun; + } + debug_event(adapter->rec_dbf, action ? 
1 : 4, r, sizeof(*r)); + spin_unlock_irqrestore(&adapter->rec_dbf_lock, flags); +} + static void _zfcp_san_dbf_event_common_ct(const char *tag, struct zfcp_fsf_req *fsf_req, u32 s_id, u32 d_id, void *buffer, int buflen) diff --git a/drivers/s390/scsi/zfcp_def.h b/drivers/s390/scsi/zfcp_def.h index 659ec739706a..aad75cfa1c60 100644 --- a/drivers/s390/scsi/zfcp_def.h +++ b/drivers/s390/scsi/zfcp_def.h @@ -295,18 +295,32 @@ struct zfcp_rec_dbf_record_target { u32 erp_count; } __attribute__ ((packed)); +struct zfcp_rec_dbf_record_trigger { + u8 want; + u8 need; + u32 as; + u32 ps; + u32 us; + u64 ref; + u64 action; + u64 wwpn; + u64 fcp_lun; +} __attribute__ ((packed)); + struct zfcp_rec_dbf_record { u8 id; u8 id2; union { struct zfcp_rec_dbf_record_thread thread; struct zfcp_rec_dbf_record_target target; + struct zfcp_rec_dbf_record_trigger trigger; } u; } __attribute__ ((packed)); enum { ZFCP_REC_DBF_ID_THREAD, ZFCP_REC_DBF_ID_TARGET, + ZFCP_REC_DBF_ID_TRIGGER, }; struct zfcp_hba_dbf_record_response { diff --git a/drivers/s390/scsi/zfcp_erp.c b/drivers/s390/scsi/zfcp_erp.c index bee03443efd4..55e034b10ded 100644 --- a/drivers/s390/scsi/zfcp_erp.c +++ b/drivers/s390/scsi/zfcp_erp.c @@ -26,13 +26,16 @@ static int zfcp_erp_adisc(struct zfcp_port *); static void zfcp_erp_adisc_handler(unsigned long); -static int zfcp_erp_adapter_reopen_internal(struct zfcp_adapter *, int); -static int zfcp_erp_port_forced_reopen_internal(struct zfcp_port *, int); -static int zfcp_erp_port_reopen_internal(struct zfcp_port *, int); -static int zfcp_erp_unit_reopen_internal(struct zfcp_unit *, int); +static int zfcp_erp_adapter_reopen_internal(struct zfcp_adapter *, int, u8, + u64); +static int zfcp_erp_port_forced_reopen_internal(struct zfcp_port *, int, u8, + u64); +static int zfcp_erp_port_reopen_internal(struct zfcp_port *, int, u8, u64); +static int zfcp_erp_unit_reopen_internal(struct zfcp_unit *, int, u8, u64); -static int zfcp_erp_port_reopen_all_internal(struct zfcp_adapter *, int); -static int zfcp_erp_unit_reopen_all_internal(struct zfcp_port *, int); +static int zfcp_erp_port_reopen_all_internal(struct zfcp_adapter *, int, u8, + u64); +static int zfcp_erp_unit_reopen_all_internal(struct zfcp_port *, int, u8, u64); static void zfcp_erp_adapter_block(struct zfcp_adapter *, int); static void zfcp_erp_adapter_unblock(struct zfcp_adapter *); @@ -97,7 +100,8 @@ static void zfcp_erp_action_dismiss_unit(struct zfcp_unit *); static void zfcp_erp_action_dismiss(struct zfcp_erp_action *); static int zfcp_erp_action_enqueue(int, struct zfcp_adapter *, - struct zfcp_port *, struct zfcp_unit *); + struct zfcp_port *, struct zfcp_unit *, + u8 id, u64 ref); static int zfcp_erp_action_dequeue(struct zfcp_erp_action *); static void zfcp_erp_action_cleanup(int, struct zfcp_adapter *, struct zfcp_port *, struct zfcp_unit *, @@ -179,7 +183,7 @@ static void zfcp_close_fsf(struct zfcp_adapter *adapter) static void zfcp_fsf_request_timeout_handler(unsigned long data) { struct zfcp_adapter *adapter = (struct zfcp_adapter *) data; - zfcp_erp_adapter_reopen(adapter, ZFCP_STATUS_COMMON_ERP_FAILED); + zfcp_erp_adapter_reopen(adapter, ZFCP_STATUS_COMMON_ERP_FAILED, 62, 0); } void zfcp_fsf_start_timer(struct zfcp_fsf_req *fsf_req, unsigned long timeout) @@ -200,8 +204,8 @@ void zfcp_fsf_start_timer(struct zfcp_fsf_req *fsf_req, unsigned long timeout) * returns: 0 - initiated action successfully * <0 - failed to initiate action */ -static int -zfcp_erp_adapter_reopen_internal(struct zfcp_adapter *adapter, int clear_mask) +static int 
zfcp_erp_adapter_reopen_internal(struct zfcp_adapter *adapter, + int clear_mask, u8 id, u64 ref) { int retval; @@ -221,7 +225,7 @@ zfcp_erp_adapter_reopen_internal(struct zfcp_adapter *adapter, int clear_mask) goto out; } retval = zfcp_erp_action_enqueue(ZFCP_ERP_ACTION_REOPEN_ADAPTER, - adapter, NULL, NULL); + adapter, NULL, NULL, id, ref); out: return retval; @@ -236,56 +240,56 @@ zfcp_erp_adapter_reopen_internal(struct zfcp_adapter *adapter, int clear_mask) * returns: 0 - initiated action successfully * <0 - failed to initiate action */ -int -zfcp_erp_adapter_reopen(struct zfcp_adapter *adapter, int clear_mask) +int zfcp_erp_adapter_reopen(struct zfcp_adapter *adapter, int clear_mask, + u8 id, u64 ref) { int retval; unsigned long flags; read_lock_irqsave(&zfcp_data.config_lock, flags); write_lock(&adapter->erp_lock); - retval = zfcp_erp_adapter_reopen_internal(adapter, clear_mask); + retval = zfcp_erp_adapter_reopen_internal(adapter, clear_mask, id, ref); write_unlock(&adapter->erp_lock); read_unlock_irqrestore(&zfcp_data.config_lock, flags); return retval; } -int -zfcp_erp_adapter_shutdown(struct zfcp_adapter *adapter, int clear_mask) +int zfcp_erp_adapter_shutdown(struct zfcp_adapter *adapter, int clear_mask, + u8 id, u64 ref) { int retval; retval = zfcp_erp_adapter_reopen(adapter, ZFCP_STATUS_COMMON_RUNNING | ZFCP_STATUS_COMMON_ERP_FAILED | - clear_mask); + clear_mask, id, ref); return retval; } -int -zfcp_erp_port_shutdown(struct zfcp_port *port, int clear_mask) +int zfcp_erp_port_shutdown(struct zfcp_port *port, int clear_mask, u8 id, + u64 ref) { int retval; retval = zfcp_erp_port_reopen(port, ZFCP_STATUS_COMMON_RUNNING | ZFCP_STATUS_COMMON_ERP_FAILED | - clear_mask); + clear_mask, id, ref); return retval; } -int -zfcp_erp_unit_shutdown(struct zfcp_unit *unit, int clear_mask) +int zfcp_erp_unit_shutdown(struct zfcp_unit *unit, int clear_mask, u8 id, + u64 ref) { int retval; retval = zfcp_erp_unit_reopen(unit, ZFCP_STATUS_COMMON_RUNNING | ZFCP_STATUS_COMMON_ERP_FAILED | - clear_mask); + clear_mask, id, ref); return retval; } @@ -400,7 +404,7 @@ zfcp_erp_adisc_handler(unsigned long data) "(adapter %s, port d_id=0x%06x)\n", zfcp_get_busid_by_adapter(adapter), d_id); debug_text_event(adapter->erp_dbf, 3, "forcreop"); - if (zfcp_erp_port_forced_reopen(port, 0)) + if (zfcp_erp_port_forced_reopen(port, 0, 63, 0)) ZFCP_LOG_NORMAL("failed reopen of port " "(adapter %s, wwpn=0x%016Lx)\n", zfcp_get_busid_by_port(port), @@ -427,7 +431,7 @@ zfcp_erp_adisc_handler(unsigned long data) "adisc_resp_wwpn=0x%016Lx)\n", zfcp_get_busid_by_port(port), port->wwpn, (wwn_t) adisc->wwpn); - if (zfcp_erp_port_reopen(port, 0)) + if (zfcp_erp_port_reopen(port, 0, 64, 0)) ZFCP_LOG_NORMAL("failed reopen of port " "(adapter %s, wwpn=0x%016Lx)\n", zfcp_get_busid_by_port(port), @@ -461,7 +465,7 @@ zfcp_test_link(struct zfcp_port *port) ZFCP_LOG_NORMAL("reopen needed for port 0x%016Lx " "on adapter %s\n ", port->wwpn, zfcp_get_busid_by_port(port)); - retval = zfcp_erp_port_forced_reopen(port, 0); + retval = zfcp_erp_port_forced_reopen(port, 0, 65, 0); if (retval != 0) { ZFCP_LOG_NORMAL("reopen of remote port 0x%016Lx " "on adapter %s failed\n", port->wwpn, @@ -484,8 +488,8 @@ zfcp_test_link(struct zfcp_port *port) * returns: 0 - initiated action successfully * <0 - failed to initiate action */ -static int -zfcp_erp_port_forced_reopen_internal(struct zfcp_port *port, int clear_mask) +static int zfcp_erp_port_forced_reopen_internal(struct zfcp_port *port, + int clear_mask, u8 id, u64 ref) { int retval; struct 
zfcp_adapter *adapter = port->adapter; @@ -509,7 +513,7 @@ zfcp_erp_port_forced_reopen_internal(struct zfcp_port *port, int clear_mask) } retval = zfcp_erp_action_enqueue(ZFCP_ERP_ACTION_REOPEN_PORT_FORCED, - port->adapter, port, NULL); + port->adapter, port, NULL, id, ref); out: return retval; @@ -524,8 +528,8 @@ zfcp_erp_port_forced_reopen_internal(struct zfcp_port *port, int clear_mask) * returns: 0 - initiated action successfully * <0 - failed to initiate action */ -int -zfcp_erp_port_forced_reopen(struct zfcp_port *port, int clear_mask) +int zfcp_erp_port_forced_reopen(struct zfcp_port *port, int clear_mask, u8 id, + u64 ref) { int retval; unsigned long flags; @@ -534,7 +538,8 @@ zfcp_erp_port_forced_reopen(struct zfcp_port *port, int clear_mask) adapter = port->adapter; read_lock_irqsave(&zfcp_data.config_lock, flags); write_lock(&adapter->erp_lock); - retval = zfcp_erp_port_forced_reopen_internal(port, clear_mask); + retval = zfcp_erp_port_forced_reopen_internal(port, clear_mask, id, + ref); write_unlock(&adapter->erp_lock); read_unlock_irqrestore(&zfcp_data.config_lock, flags); @@ -551,8 +556,8 @@ zfcp_erp_port_forced_reopen(struct zfcp_port *port, int clear_mask) * returns: 0 - initiated action successfully * <0 - failed to initiate action */ -static int -zfcp_erp_port_reopen_internal(struct zfcp_port *port, int clear_mask) +static int zfcp_erp_port_reopen_internal(struct zfcp_port *port, int clear_mask, + u8 id, u64 ref) { int retval; struct zfcp_adapter *adapter = port->adapter; @@ -578,7 +583,7 @@ zfcp_erp_port_reopen_internal(struct zfcp_port *port, int clear_mask) } retval = zfcp_erp_action_enqueue(ZFCP_ERP_ACTION_REOPEN_PORT, - port->adapter, port, NULL); + port->adapter, port, NULL, id, ref); out: return retval; @@ -594,8 +599,7 @@ zfcp_erp_port_reopen_internal(struct zfcp_port *port, int clear_mask) * correct locking. An error recovery task is initiated to do the reopen. * To wait for the completion of the reopen zfcp_erp_wait should be used. */ -int -zfcp_erp_port_reopen(struct zfcp_port *port, int clear_mask) +int zfcp_erp_port_reopen(struct zfcp_port *port, int clear_mask, u8 id, u64 ref) { int retval; unsigned long flags; @@ -603,7 +607,7 @@ zfcp_erp_port_reopen(struct zfcp_port *port, int clear_mask) read_lock_irqsave(&zfcp_data.config_lock, flags); write_lock(&adapter->erp_lock); - retval = zfcp_erp_port_reopen_internal(port, clear_mask); + retval = zfcp_erp_port_reopen_internal(port, clear_mask, id, ref); write_unlock(&adapter->erp_lock); read_unlock_irqrestore(&zfcp_data.config_lock, flags); @@ -620,8 +624,8 @@ zfcp_erp_port_reopen(struct zfcp_port *port, int clear_mask) * returns: 0 - initiated action successfully * <0 - failed to initiate action */ -static int -zfcp_erp_unit_reopen_internal(struct zfcp_unit *unit, int clear_mask) +static int zfcp_erp_unit_reopen_internal(struct zfcp_unit *unit, int clear_mask, + u8 id, u64 ref) { int retval; struct zfcp_adapter *adapter = unit->port->adapter; @@ -647,7 +651,7 @@ zfcp_erp_unit_reopen_internal(struct zfcp_unit *unit, int clear_mask) } retval = zfcp_erp_action_enqueue(ZFCP_ERP_ACTION_REOPEN_UNIT, - unit->port->adapter, unit->port, unit); + adapter, unit->port, unit, id, ref); out: return retval; } @@ -662,8 +666,7 @@ zfcp_erp_unit_reopen_internal(struct zfcp_unit *unit, int clear_mask) * locking. An error recovery task is initiated to do the reopen. * To wait for the completion of the reopen zfcp_erp_wait should be used. 
*/ -int -zfcp_erp_unit_reopen(struct zfcp_unit *unit, int clear_mask) +int zfcp_erp_unit_reopen(struct zfcp_unit *unit, int clear_mask, u8 id, u64 ref) { int retval; unsigned long flags; @@ -675,7 +678,7 @@ zfcp_erp_unit_reopen(struct zfcp_unit *unit, int clear_mask) read_lock_irqsave(&zfcp_data.config_lock, flags); write_lock(&adapter->erp_lock); - retval = zfcp_erp_unit_reopen_internal(unit, clear_mask); + retval = zfcp_erp_unit_reopen_internal(unit, clear_mask, id, ref); write_unlock(&adapter->erp_lock); read_unlock_irqrestore(&zfcp_data.config_lock, flags); @@ -1215,7 +1218,7 @@ zfcp_erp_strategy(struct zfcp_erp_action *erp_action) "restarting I/O on adapter %s " "to free mempool\n", zfcp_get_busid_by_adapter(adapter)); - zfcp_erp_adapter_reopen_internal(adapter, 0); + zfcp_erp_adapter_reopen_internal(adapter, 0, 66, 0); } else { debug_text_event(adapter->erp_dbf, 2, "a_st_memw"); retval = zfcp_erp_strategy_memwait(erp_action); @@ -1499,7 +1502,9 @@ zfcp_erp_strategy_statechange(int action, case ZFCP_ERP_ACTION_REOPEN_ADAPTER: if (zfcp_erp_strategy_statechange_detected(&adapter->status, status)) { - zfcp_erp_adapter_reopen_internal(adapter, ZFCP_STATUS_COMMON_ERP_FAILED); + zfcp_erp_adapter_reopen_internal(adapter, + ZFCP_STATUS_COMMON_ERP_FAILED, + 67, 0); retval = ZFCP_ERP_EXIT; } break; @@ -1508,7 +1513,9 @@ zfcp_erp_strategy_statechange(int action, case ZFCP_ERP_ACTION_REOPEN_PORT: if (zfcp_erp_strategy_statechange_detected(&port->status, status)) { - zfcp_erp_port_reopen_internal(port, ZFCP_STATUS_COMMON_ERP_FAILED); + zfcp_erp_port_reopen_internal(port, + ZFCP_STATUS_COMMON_ERP_FAILED, + 68, 0); retval = ZFCP_ERP_EXIT; } break; @@ -1516,7 +1523,9 @@ zfcp_erp_strategy_statechange(int action, case ZFCP_ERP_ACTION_REOPEN_UNIT: if (zfcp_erp_strategy_statechange_detected(&unit->status, status)) { - zfcp_erp_unit_reopen_internal(unit, ZFCP_STATUS_COMMON_ERP_FAILED); + zfcp_erp_unit_reopen_internal(unit, + ZFCP_STATUS_COMMON_ERP_FAILED, + 69, 0); retval = ZFCP_ERP_EXIT; } break; @@ -1700,29 +1709,29 @@ zfcp_erp_strategy_followup_actions(int action, case ZFCP_ERP_ACTION_REOPEN_ADAPTER: if (status == ZFCP_ERP_SUCCEEDED) - zfcp_erp_port_reopen_all_internal(adapter, 0); + zfcp_erp_port_reopen_all_internal(adapter, 0, 70, 0); else - zfcp_erp_adapter_reopen_internal(adapter, 0); + zfcp_erp_adapter_reopen_internal(adapter, 0, 71, 0); break; case ZFCP_ERP_ACTION_REOPEN_PORT_FORCED: if (status == ZFCP_ERP_SUCCEEDED) - zfcp_erp_port_reopen_internal(port, 0); + zfcp_erp_port_reopen_internal(port, 0, 72, 0); else - zfcp_erp_adapter_reopen_internal(adapter, 0); + zfcp_erp_adapter_reopen_internal(adapter, 0, 73, 0); break; case ZFCP_ERP_ACTION_REOPEN_PORT: if (status == ZFCP_ERP_SUCCEEDED) - zfcp_erp_unit_reopen_all_internal(port, 0); + zfcp_erp_unit_reopen_all_internal(port, 0, 74, 0); else - zfcp_erp_port_forced_reopen_internal(port, 0); + zfcp_erp_port_forced_reopen_internal(port, 0, 75, 0); break; case ZFCP_ERP_ACTION_REOPEN_UNIT: /* Nothing to do if status == ZFCP_ERP_SUCCEEDED */ if (status != ZFCP_ERP_SUCCEEDED) - zfcp_erp_port_reopen_internal(unit->port, 0); + zfcp_erp_port_reopen_internal(unit->port, 0, 76, 0); break; } @@ -1863,30 +1872,32 @@ void zfcp_erp_modify_unit_status(struct zfcp_unit *unit, u8 id, u64 ref, * returns: 0 - initiated action successfully * <0 - failed to initiate action */ -int -zfcp_erp_port_reopen_all(struct zfcp_adapter *adapter, int clear_mask) +int zfcp_erp_port_reopen_all(struct zfcp_adapter *adapter, int clear_mask, + u8 id, u64 ref) { int retval; unsigned long 
flags; read_lock_irqsave(&zfcp_data.config_lock, flags); write_lock(&adapter->erp_lock); - retval = zfcp_erp_port_reopen_all_internal(adapter, clear_mask); + retval = zfcp_erp_port_reopen_all_internal(adapter, clear_mask, id, + ref); write_unlock(&adapter->erp_lock); read_unlock_irqrestore(&zfcp_data.config_lock, flags); return retval; } -static int -zfcp_erp_port_reopen_all_internal(struct zfcp_adapter *adapter, int clear_mask) +static int zfcp_erp_port_reopen_all_internal(struct zfcp_adapter *adapter, + int clear_mask, u8 id, u64 ref) { int retval = 0; struct zfcp_port *port; list_for_each_entry(port, &adapter->port_list_head, list) if (!atomic_test_mask(ZFCP_STATUS_PORT_WKA, &port->status)) - zfcp_erp_port_reopen_internal(port, clear_mask); + zfcp_erp_port_reopen_internal(port, clear_mask, id, + ref); return retval; } @@ -1898,14 +1909,14 @@ zfcp_erp_port_reopen_all_internal(struct zfcp_adapter *adapter, int clear_mask) * * returns: FIXME */ -static int -zfcp_erp_unit_reopen_all_internal(struct zfcp_port *port, int clear_mask) +static int zfcp_erp_unit_reopen_all_internal(struct zfcp_port *port, + int clear_mask, u8 id, u64 ref) { int retval = 0; struct zfcp_unit *unit; list_for_each_entry(unit, &port->unit_list_head, list) - zfcp_erp_unit_reopen_internal(unit, clear_mask); + zfcp_erp_unit_reopen_internal(unit, clear_mask, id, ref); return retval; } @@ -2466,8 +2477,8 @@ zfcp_erp_port_strategy_open_common(struct zfcp_erp_action *erp_action) /* nameserver port may live again */ atomic_set_mask(ZFCP_STATUS_COMMON_RUNNING, &adapter->nameserver_port->status); - if (zfcp_erp_port_reopen(adapter->nameserver_port, 0) - >= 0) { + if (zfcp_erp_port_reopen(adapter->nameserver_port, 0, + 77, (u64)erp_action) >= 0) { erp_action->step = ZFCP_ERP_STEP_NAMESERVER_OPEN; retval = ZFCP_ERP_CONTINUES; @@ -2963,14 +2974,12 @@ void zfcp_erp_start_timer(struct zfcp_fsf_req *fsf_req) * * returns: */ -static int -zfcp_erp_action_enqueue(int action, - struct zfcp_adapter *adapter, - struct zfcp_port *port, struct zfcp_unit *unit) +static int zfcp_erp_action_enqueue(int want, struct zfcp_adapter *adapter, + struct zfcp_port *port, + struct zfcp_unit *unit, u8 id, u64 ref) { - int retval = 1; + int retval = 1, need = want; struct zfcp_erp_action *erp_action = NULL; - int stronger_action = 0; u32 status = 0; /* @@ -2989,9 +2998,9 @@ zfcp_erp_action_enqueue(int action, &adapter->status)) return -EIO; - debug_event(adapter->erp_dbf, 4, &action, sizeof (int)); + debug_event(adapter->erp_dbf, 4, &want, sizeof (int)); /* check whether we really need this */ - switch (action) { + switch (want) { case ZFCP_ERP_ACTION_REOPEN_UNIT: if (atomic_test_mask (ZFCP_STATUS_COMMON_ERP_INUSE, &unit->status)) { @@ -3009,10 +3018,8 @@ zfcp_erp_action_enqueue(int action, goto out; } if (!atomic_test_mask - (ZFCP_STATUS_COMMON_UNBLOCKED, &port->status)) { - stronger_action = ZFCP_ERP_ACTION_REOPEN_PORT; - unit = NULL; - } + (ZFCP_STATUS_COMMON_UNBLOCKED, &port->status)) + need = ZFCP_ERP_ACTION_REOPEN_PORT; /* fall through !!! 
*/ case ZFCP_ERP_ACTION_REOPEN_PORT: @@ -3032,7 +3039,7 @@ zfcp_erp_action_enqueue(int action, ZFCP_ERP_ACTION_REOPEN_PORT_FORCED) { ZFCP_LOG_INFO("dropped erp action %i (port " "0x%016Lx, action in use: %i)\n", - action, port->wwpn, + want, port->wwpn, port->erp_action.action); debug_text_event(adapter->erp_dbf, 4, "pf_actenq_drp"); @@ -3050,10 +3057,8 @@ zfcp_erp_action_enqueue(int action, goto out; } if (!atomic_test_mask - (ZFCP_STATUS_COMMON_UNBLOCKED, &adapter->status)) { - stronger_action = ZFCP_ERP_ACTION_REOPEN_ADAPTER; - port = NULL; - } + (ZFCP_STATUS_COMMON_UNBLOCKED, &adapter->status)) + need = ZFCP_ERP_ACTION_REOPEN_ADAPTER; /* fall through !!! */ case ZFCP_ERP_ACTION_REOPEN_ADAPTER: @@ -3066,30 +3071,28 @@ zfcp_erp_action_enqueue(int action, default: debug_text_exception(adapter->erp_dbf, 1, "a_actenq_bug"); - debug_event(adapter->erp_dbf, 1, &action, sizeof (int)); + debug_event(adapter->erp_dbf, 1, &want, sizeof (int)); ZFCP_LOG_NORMAL("bug: unknown erp action requested " "on adapter %s (action=%d)\n", - zfcp_get_busid_by_adapter(adapter), action); + zfcp_get_busid_by_adapter(adapter), want); goto out; } /* check whether we need something stronger first */ - if (stronger_action) { + if (need) { debug_text_event(adapter->erp_dbf, 4, "a_actenq_str"); - debug_event(adapter->erp_dbf, 4, &stronger_action, + debug_event(adapter->erp_dbf, 4, &need, sizeof (int)); ZFCP_LOG_DEBUG("stronger erp action %d needed before " "erp action %d on adapter %s\n", - stronger_action, action, - zfcp_get_busid_by_adapter(adapter)); - action = stronger_action; + need, want, zfcp_get_busid_by_adapter(adapter)); } /* mark adapter to have some error recovery pending */ atomic_set_mask(ZFCP_STATUS_ADAPTER_ERP_PENDING, &adapter->status); /* setup error recovery action */ - switch (action) { + switch (need) { case ZFCP_ERP_ACTION_REOPEN_UNIT: zfcp_unit_get(unit); @@ -3128,7 +3131,7 @@ zfcp_erp_action_enqueue(int action, erp_action->adapter = adapter; erp_action->port = port; erp_action->unit = unit; - erp_action->action = action; + erp_action->action = need; erp_action->status = status; ++adapter->erp_total_count; @@ -3139,6 +3142,8 @@ zfcp_erp_action_enqueue(int action, zfcp_rec_dbf_event_thread(1, adapter, 0); retval = 0; out: + zfcp_rec_dbf_event_trigger(id, ref, want, need, (u64)erp_action, + adapter, port, unit); return retval; } @@ -3322,7 +3327,7 @@ void zfcp_erp_port_boxed(struct zfcp_port *port, u8 id, u64 ref) zfcp_erp_modify_port_status(port, id, ref, ZFCP_STATUS_COMMON_ACCESS_BOXED, ZFCP_SET); read_unlock_irqrestore(&zfcp_data.config_lock, flags); - zfcp_erp_port_reopen(port, ZFCP_STATUS_COMMON_ERP_FAILED); + zfcp_erp_port_reopen(port, ZFCP_STATUS_COMMON_ERP_FAILED, id, ref); } void zfcp_erp_unit_boxed(struct zfcp_unit *unit, u8 id, u64 ref) @@ -3333,7 +3338,7 @@ void zfcp_erp_unit_boxed(struct zfcp_unit *unit, u8 id, u64 ref) debug_event(adapter->erp_dbf, 3, &unit->fcp_lun, sizeof(fcp_lun_t)); zfcp_erp_modify_unit_status(unit, id, ref, ZFCP_STATUS_COMMON_ACCESS_BOXED, ZFCP_SET); - zfcp_erp_unit_reopen(unit, ZFCP_STATUS_COMMON_ERP_FAILED); + zfcp_erp_unit_reopen(unit, ZFCP_STATUS_COMMON_ERP_FAILED, id, ref); } void zfcp_erp_port_access_denied(struct zfcp_port *port, u8 id, u64 ref) @@ -3361,8 +3366,8 @@ void zfcp_erp_unit_access_denied(struct zfcp_unit *unit, u8 id, u64 ref) ZFCP_STATUS_COMMON_ACCESS_DENIED, ZFCP_SET); } -void -zfcp_erp_adapter_access_changed(struct zfcp_adapter *adapter) +void zfcp_erp_adapter_access_changed(struct zfcp_adapter *adapter, u8 id, + u64 ref) { struct zfcp_port 
*port; unsigned long flags; @@ -3375,15 +3380,14 @@ zfcp_erp_adapter_access_changed(struct zfcp_adapter *adapter) read_lock_irqsave(&zfcp_data.config_lock, flags); if (adapter->nameserver_port) - zfcp_erp_port_access_changed(adapter->nameserver_port); + zfcp_erp_port_access_changed(adapter->nameserver_port, id, ref); list_for_each_entry(port, &adapter->port_list_head, list) if (port != adapter->nameserver_port) - zfcp_erp_port_access_changed(port); + zfcp_erp_port_access_changed(port, id, ref); read_unlock_irqrestore(&zfcp_data.config_lock, flags); } -void -zfcp_erp_port_access_changed(struct zfcp_port *port) +void zfcp_erp_port_access_changed(struct zfcp_port *port, u8 id, u64 ref) { struct zfcp_adapter *adapter = port->adapter; struct zfcp_unit *unit; @@ -3397,21 +3401,20 @@ zfcp_erp_port_access_changed(struct zfcp_port *port) &port->status)) { if (!atomic_test_mask(ZFCP_STATUS_PORT_WKA, &port->status)) list_for_each_entry(unit, &port->unit_list_head, list) - zfcp_erp_unit_access_changed(unit); + zfcp_erp_unit_access_changed(unit, id, ref); return; } ZFCP_LOG_NORMAL("reopen of port 0x%016Lx on adapter %s " "(due to ACT update)\n", port->wwpn, zfcp_get_busid_by_adapter(adapter)); - if (zfcp_erp_port_reopen(port, ZFCP_STATUS_COMMON_ERP_FAILED) != 0) + if (zfcp_erp_port_reopen(port, ZFCP_STATUS_COMMON_ERP_FAILED, id, ref)) ZFCP_LOG_NORMAL("failed reopen of port" "(adapter %s, wwpn=0x%016Lx)\n", zfcp_get_busid_by_adapter(adapter), port->wwpn); } -void -zfcp_erp_unit_access_changed(struct zfcp_unit *unit) +void zfcp_erp_unit_access_changed(struct zfcp_unit *unit, u8 id, u64 ref) { struct zfcp_adapter *adapter = unit->port->adapter; @@ -3428,7 +3431,7 @@ zfcp_erp_unit_access_changed(struct zfcp_unit *unit) " on adapter %s (due to ACT update)\n", unit->fcp_lun, unit->port->wwpn, zfcp_get_busid_by_adapter(adapter)); - if (zfcp_erp_unit_reopen(unit, ZFCP_STATUS_COMMON_ERP_FAILED) != 0) + if (zfcp_erp_unit_reopen(unit, ZFCP_STATUS_COMMON_ERP_FAILED, id, ref)) ZFCP_LOG_NORMAL("failed reopen of unit (adapter %s, " "wwpn=0x%016Lx, fcp_lun=0x%016Lx)\n", zfcp_get_busid_by_adapter(adapter), diff --git a/drivers/s390/scsi/zfcp_ext.h b/drivers/s390/scsi/zfcp_ext.h index 20ad6fde2e22..e10ed145baa7 100644 --- a/drivers/s390/scsi/zfcp_ext.h +++ b/drivers/s390/scsi/zfcp_ext.h @@ -133,20 +133,20 @@ extern struct fc_function_template zfcp_transport_functions; /******************************** ERP ****************************************/ extern void zfcp_erp_modify_adapter_status(struct zfcp_adapter *, u8, u64, u32, int); -extern int zfcp_erp_adapter_reopen(struct zfcp_adapter *, int); -extern int zfcp_erp_adapter_shutdown(struct zfcp_adapter *, int); +extern int zfcp_erp_adapter_reopen(struct zfcp_adapter *, int, u8, u64); +extern int zfcp_erp_adapter_shutdown(struct zfcp_adapter *, int, u8, u64); extern void zfcp_erp_adapter_failed(struct zfcp_adapter *, u8, u64); extern void zfcp_erp_modify_port_status(struct zfcp_port *, u8, u64, u32, int); -extern int zfcp_erp_port_reopen(struct zfcp_port *, int); -extern int zfcp_erp_port_shutdown(struct zfcp_port *, int); -extern int zfcp_erp_port_forced_reopen(struct zfcp_port *, int); +extern int zfcp_erp_port_reopen(struct zfcp_port *, int, u8, u64); +extern int zfcp_erp_port_shutdown(struct zfcp_port *, int, u8, u64); +extern int zfcp_erp_port_forced_reopen(struct zfcp_port *, int, u8, u64); extern void zfcp_erp_port_failed(struct zfcp_port *, u8, u64); -extern int zfcp_erp_port_reopen_all(struct zfcp_adapter *, int); +extern int zfcp_erp_port_reopen_all(struct 
zfcp_adapter *, int, u8, u64); extern void zfcp_erp_modify_unit_status(struct zfcp_unit *, u8, u64, u32, int); -extern int zfcp_erp_unit_reopen(struct zfcp_unit *, int); -extern int zfcp_erp_unit_shutdown(struct zfcp_unit *, int); +extern int zfcp_erp_unit_reopen(struct zfcp_unit *, int, u8, u64); +extern int zfcp_erp_unit_shutdown(struct zfcp_unit *, int, u8, u64); extern void zfcp_erp_unit_failed(struct zfcp_unit *, u8, u64); extern int zfcp_erp_thread_setup(struct zfcp_adapter *); @@ -160,9 +160,9 @@ extern void zfcp_erp_port_boxed(struct zfcp_port *, u8 id, u64 ref); extern void zfcp_erp_unit_boxed(struct zfcp_unit *, u8 id, u64 ref); extern void zfcp_erp_port_access_denied(struct zfcp_port *, u8 id, u64 ref); extern void zfcp_erp_unit_access_denied(struct zfcp_unit *, u8 id, u64 ref); -extern void zfcp_erp_adapter_access_changed(struct zfcp_adapter *); -extern void zfcp_erp_port_access_changed(struct zfcp_port *); -extern void zfcp_erp_unit_access_changed(struct zfcp_unit *); +extern void zfcp_erp_adapter_access_changed(struct zfcp_adapter *, u8, u64); +extern void zfcp_erp_port_access_changed(struct zfcp_port *, u8, u64); +extern void zfcp_erp_unit_access_changed(struct zfcp_unit *, u8, u64); /******************************** AUX ****************************************/ extern void zfcp_rec_dbf_event_thread(u8 id, struct zfcp_adapter *adapter, @@ -170,6 +170,9 @@ extern void zfcp_rec_dbf_event_thread(u8 id, struct zfcp_adapter *adapter, extern void zfcp_rec_dbf_event_adapter(u8 id, u64 ref, struct zfcp_adapter *); extern void zfcp_rec_dbf_event_port(u8 id, u64 ref, struct zfcp_port *port); extern void zfcp_rec_dbf_event_unit(u8 id, u64 ref, struct zfcp_unit *unit); +extern void zfcp_rec_dbf_event_trigger(u8 id, u64 ref, u8 want, u8 need, + u64 action, struct zfcp_adapter *, + struct zfcp_port *, struct zfcp_unit *); extern void zfcp_hba_dbf_event_fsf_response(struct zfcp_fsf_req *); extern void zfcp_hba_dbf_event_fsf_unsol(const char *, struct zfcp_adapter *, diff --git a/drivers/s390/scsi/zfcp_fsf.c b/drivers/s390/scsi/zfcp_fsf.c index 2b7ddb78767c..ffdf99736a4c 100644 --- a/drivers/s390/scsi/zfcp_fsf.c +++ b/drivers/s390/scsi/zfcp_fsf.c @@ -298,7 +298,7 @@ zfcp_fsf_protstatus_eval(struct zfcp_fsf_req *fsf_req) zfcp_get_busid_by_adapter(adapter), prot_status_qual->version_error.fsf_version, ZFCP_QTCB_VERSION); - zfcp_erp_adapter_shutdown(adapter, 0); + zfcp_erp_adapter_shutdown(adapter, 0, 117, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -309,7 +309,7 @@ zfcp_fsf_protstatus_eval(struct zfcp_fsf_req *fsf_req) qtcb->prefix.req_seq_no, zfcp_get_busid_by_adapter(adapter), prot_status_qual->sequence_error.exp_req_seq_no); - zfcp_erp_adapter_reopen(adapter, 0); + zfcp_erp_adapter_reopen(adapter, 0, 98, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_RETRY; fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -320,7 +320,7 @@ zfcp_fsf_protstatus_eval(struct zfcp_fsf_req *fsf_req) "that used on adapter %s. 
" "Stopping all operations on this adapter.\n", zfcp_get_busid_by_adapter(adapter)); - zfcp_erp_adapter_shutdown(adapter, 0); + zfcp_erp_adapter_shutdown(adapter, 0, 118, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -337,14 +337,15 @@ zfcp_fsf_protstatus_eval(struct zfcp_fsf_req *fsf_req) *(unsigned long long*) (&qtcb->bottom.support.req_handle), zfcp_get_busid_by_adapter(adapter)); - zfcp_erp_adapter_shutdown(adapter, 0); + zfcp_erp_adapter_shutdown(adapter, 0, 78, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; case FSF_PROT_LINK_DOWN: zfcp_fsf_link_down_info_eval(fsf_req, 37, &prot_status_qual->link_down_info); - zfcp_erp_adapter_reopen(adapter, 0); + /* FIXME: reopening adapter now? better wait for link up */ + zfcp_erp_adapter_reopen(adapter, 0, 79, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -359,7 +360,8 @@ zfcp_fsf_protstatus_eval(struct zfcp_fsf_req *fsf_req) ZFCP_SET); zfcp_erp_adapter_reopen(adapter, ZFCP_STATUS_ADAPTER_LINK_UNPLUGGED - | ZFCP_STATUS_COMMON_ERP_FAILED); + | ZFCP_STATUS_COMMON_ERP_FAILED, + 99, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -369,7 +371,7 @@ zfcp_fsf_protstatus_eval(struct zfcp_fsf_req *fsf_req) "Restarting all operations on this " "adapter.\n", zfcp_get_busid_by_adapter(adapter)); - zfcp_erp_adapter_reopen(adapter, 0); + zfcp_erp_adapter_reopen(adapter, 0, 100, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_RETRY; fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -382,7 +384,7 @@ zfcp_fsf_protstatus_eval(struct zfcp_fsf_req *fsf_req) "(debug info 0x%x).\n", zfcp_get_busid_by_adapter(adapter), qtcb->prefix.prot_status); - zfcp_erp_adapter_shutdown(adapter, 0); + zfcp_erp_adapter_shutdown(adapter, 0, 119, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; } @@ -421,7 +423,8 @@ zfcp_fsf_fsfstatus_eval(struct zfcp_fsf_req *fsf_req) "(debug info 0x%x).\n", zfcp_get_busid_by_adapter(fsf_req->adapter), fsf_req->qtcb->header.fsf_command); - zfcp_erp_adapter_shutdown(fsf_req->adapter, 0); + zfcp_erp_adapter_shutdown(fsf_req->adapter, 0, 120, + (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -475,7 +478,8 @@ zfcp_fsf_fsfstatus_qual_eval(struct zfcp_fsf_req *fsf_req) "problem on the adapter %s " "Stopping all operations on this adapter. 
", zfcp_get_busid_by_adapter(fsf_req->adapter)); - zfcp_erp_adapter_shutdown(fsf_req->adapter, 0); + zfcp_erp_adapter_shutdown(fsf_req->adapter, 0, 121, + (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; case FSF_SQ_ULP_PROGRAMMING_ERROR: @@ -796,12 +800,12 @@ zfcp_fsf_status_read_port_closed(struct zfcp_fsf_req *fsf_req) case FSF_STATUS_READ_SUB_CLOSE_PHYS_PORT: debug_text_event(adapter->erp_dbf, 3, "unsol_pc_phys:"); - zfcp_erp_port_reopen(port, 0); + zfcp_erp_port_reopen(port, 0, 101, (u64)fsf_req); break; case FSF_STATUS_READ_SUB_ERROR_PORT: debug_text_event(adapter->erp_dbf, 1, "unsol_pc_err:"); - zfcp_erp_port_shutdown(port, 0); + zfcp_erp_port_shutdown(port, 0, 122, (u64)fsf_req); break; default: @@ -935,7 +939,8 @@ zfcp_fsf_status_read_handler(struct zfcp_fsf_req *fsf_req) ZFCP_SET); zfcp_erp_adapter_reopen(adapter, ZFCP_STATUS_ADAPTER_LINK_UNPLUGGED - | ZFCP_STATUS_COMMON_ERP_FAILED); + | ZFCP_STATUS_COMMON_ERP_FAILED, + 102, (u64)fsf_req); break; case FSF_STATUS_READ_NOTIFICATION_LOST: @@ -969,13 +974,14 @@ zfcp_fsf_status_read_handler(struct zfcp_fsf_req *fsf_req) if (status_buffer->status_subtype & FSF_STATUS_READ_SUB_ACT_UPDATED) - zfcp_erp_adapter_access_changed(adapter); + zfcp_erp_adapter_access_changed(adapter, 135, + (u64)fsf_req); break; case FSF_STATUS_READ_CFDC_UPDATED: ZFCP_LOG_NORMAL("CFDC has been updated on the adapter %s\n", zfcp_get_busid_by_adapter(adapter)); - zfcp_erp_adapter_access_changed(adapter); + zfcp_erp_adapter_access_changed(adapter, 136, (u64)fsf_req); break; case FSF_STATUS_READ_CFDC_HARDENED: @@ -1044,7 +1050,7 @@ zfcp_fsf_status_read_handler(struct zfcp_fsf_req *fsf_req) ZFCP_LOG_INFO("restart adapter %s due to status read " "buffer shortage\n", zfcp_get_busid_by_adapter(adapter)); - zfcp_erp_adapter_reopen(adapter, 0); + zfcp_erp_adapter_reopen(adapter, 0, 103, (u64)fsf_req); } } out: @@ -1167,7 +1173,8 @@ zfcp_fsf_abort_fcp_command_handler(struct zfcp_fsf_req *new_fsf_req) /* Let's hope this sorts out the mess */ debug_text_event(new_fsf_req->adapter->erp_dbf, 1, "fsf_s_phand_nv1"); - zfcp_erp_adapter_reopen(unit->port->adapter, 0); + zfcp_erp_adapter_reopen(unit->port->adapter, 0, 104, + (u64)new_fsf_req); new_fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; } break; @@ -1199,7 +1206,8 @@ zfcp_fsf_abort_fcp_command_handler(struct zfcp_fsf_req *new_fsf_req) /* Let's hope this sorts out the mess */ debug_text_event(new_fsf_req->adapter->erp_dbf, 1, "fsf_s_lhand_nv1"); - zfcp_erp_port_reopen(unit->port, 0); + zfcp_erp_port_reopen(unit->port, 0, 105, + (u64)new_fsf_req); new_fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; } break; @@ -1478,7 +1486,7 @@ zfcp_fsf_send_ct_handler(struct zfcp_fsf_req *fsf_req) ZFCP_FC_SERVICE_CLASS_DEFAULT); /* stop operation for this adapter */ debug_text_exception(adapter->erp_dbf, 0, "fsf_s_class_nsup"); - zfcp_erp_adapter_shutdown(adapter, 0); + zfcp_erp_adapter_shutdown(adapter, 0, 123, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -1547,7 +1555,7 @@ zfcp_fsf_send_ct_handler(struct zfcp_fsf_req *fsf_req) (char *) &header->fsf_status_qual, sizeof (union fsf_status_qual)); debug_text_event(adapter->erp_dbf, 1, "fsf_s_phandle_nv"); - zfcp_erp_adapter_reopen(adapter, 0); + zfcp_erp_adapter_reopen(adapter, 0, 106, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -1782,7 +1790,7 @@ static int zfcp_fsf_send_els_handler(struct zfcp_fsf_req *fsf_req) ZFCP_FC_SERVICE_CLASS_DEFAULT); /* stop operation for this adapter */ debug_text_exception(adapter->erp_dbf, 0, 
"fsf_s_class_nsup"); - zfcp_erp_adapter_shutdown(adapter, 0); + zfcp_erp_adapter_shutdown(adapter, 0, 124, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -2104,7 +2112,7 @@ zfcp_fsf_exchange_config_evaluate(struct zfcp_fsf_req *fsf_req, int xchg_ok) "driver (try updated device driver)\n", zfcp_get_busid_by_adapter(adapter)); debug_text_event(adapter->erp_dbf, 0, "low_qtcb_ver"); - zfcp_erp_adapter_shutdown(adapter, 0); + zfcp_erp_adapter_shutdown(adapter, 0, 125, (u64)fsf_req); return -EIO; } if (ZFCP_QTCB_VERSION > bottom->high_qtcb_version) { @@ -2114,7 +2122,7 @@ zfcp_fsf_exchange_config_evaluate(struct zfcp_fsf_req *fsf_req, int xchg_ok) "(consider a microcode upgrade)\n", zfcp_get_busid_by_adapter(adapter)); debug_text_event(adapter->erp_dbf, 0, "high_qtcb_ver"); - zfcp_erp_adapter_shutdown(adapter, 0); + zfcp_erp_adapter_shutdown(adapter, 0, 126, (u64)fsf_req); return -EIO; } return 0; @@ -2164,7 +2172,7 @@ zfcp_fsf_exchange_config_data_handler(struct zfcp_fsf_req *fsf_req) zfcp_get_busid_by_adapter(adapter)); debug_text_event(fsf_req->adapter->erp_dbf, 0, "top-al"); - zfcp_erp_adapter_shutdown(adapter, 0); + zfcp_erp_adapter_shutdown(adapter, 0, 127, (u64)fsf_req); return -EIO; case FC_PORTTYPE_NPORT: ZFCP_LOG_NORMAL("Switched fabric fibrechannel " @@ -2181,7 +2189,7 @@ zfcp_fsf_exchange_config_data_handler(struct zfcp_fsf_req *fsf_req) zfcp_get_busid_by_adapter(adapter)); debug_text_exception(fsf_req->adapter->erp_dbf, 0, "unknown-topo"); - zfcp_erp_adapter_shutdown(adapter, 0); + zfcp_erp_adapter_shutdown(adapter, 0, 128, (u64)fsf_req); return -EIO; } bottom = &qtcb->bottom.config; @@ -2197,7 +2205,7 @@ zfcp_fsf_exchange_config_data_handler(struct zfcp_fsf_req *fsf_req) "qtcb-size"); debug_event(fsf_req->adapter->erp_dbf, 0, &bottom->max_qtcb_size, sizeof (u32)); - zfcp_erp_adapter_shutdown(adapter, 0); + zfcp_erp_adapter_shutdown(adapter, 0, 129, (u64)fsf_req); return -EIO; } atomic_set_mask(ZFCP_STATUS_ADAPTER_XCONFIG_OK, @@ -2219,7 +2227,7 @@ zfcp_fsf_exchange_config_data_handler(struct zfcp_fsf_req *fsf_req) debug_text_event(fsf_req->adapter->erp_dbf, 0, "fsf-stat-ng"); debug_event(fsf_req->adapter->erp_dbf, 0, &fsf_req->qtcb->header.fsf_status, sizeof(u32)); - zfcp_erp_adapter_shutdown(adapter, 0); + zfcp_erp_adapter_shutdown(adapter, 0, 130, (u64)fsf_req); return -EIO; } return 0; @@ -2760,7 +2768,7 @@ zfcp_fsf_close_port_handler(struct zfcp_fsf_req *fsf_req) sizeof (union fsf_status_qual)); debug_text_event(fsf_req->adapter->erp_dbf, 1, "fsf_s_phand_nv"); - zfcp_erp_adapter_reopen(port->adapter, 0); + zfcp_erp_adapter_reopen(port->adapter, 0, 107, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -2903,7 +2911,7 @@ zfcp_fsf_close_physical_port_handler(struct zfcp_fsf_req *fsf_req) sizeof (union fsf_status_qual)); debug_text_event(fsf_req->adapter->erp_dbf, 1, "fsf_s_phand_nv"); - zfcp_erp_adapter_reopen(port->adapter, 0); + zfcp_erp_adapter_reopen(port->adapter, 0, 108, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -3128,7 +3136,8 @@ zfcp_fsf_open_unit_handler(struct zfcp_fsf_req *fsf_req) (char *) &header->fsf_status_qual, sizeof (union fsf_status_qual)); debug_text_event(adapter->erp_dbf, 1, "fsf_s_ph_nv"); - zfcp_erp_adapter_reopen(unit->port->adapter, 0); + zfcp_erp_adapter_reopen(unit->port->adapter, 0, 109, + (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -3311,13 +3320,15 @@ zfcp_fsf_open_unit_handler(struct zfcp_fsf_req *fsf_req) "unit not supported\n"); 
zfcp_erp_unit_failed(unit, 35, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; - zfcp_erp_unit_shutdown(unit, 0); + zfcp_erp_unit_shutdown(unit, 0, 80, + (u64)fsf_req); } else if (!exclusive && readwrite) { ZFCP_LOG_NORMAL("shared access of read-write " "unit not supported\n"); zfcp_erp_unit_failed(unit, 36, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; - zfcp_erp_unit_shutdown(unit, 0); + zfcp_erp_unit_shutdown(unit, 0, 81, + (u64)fsf_req); } } @@ -3445,7 +3456,8 @@ zfcp_fsf_close_unit_handler(struct zfcp_fsf_req *fsf_req) sizeof (union fsf_status_qual)); debug_text_event(fsf_req->adapter->erp_dbf, 1, "fsf_s_phand_nv"); - zfcp_erp_adapter_reopen(unit->port->adapter, 0); + zfcp_erp_adapter_reopen(unit->port->adapter, 0, 110, + (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -3463,7 +3475,7 @@ zfcp_fsf_close_unit_handler(struct zfcp_fsf_req *fsf_req) sizeof (union fsf_status_qual)); debug_text_event(fsf_req->adapter->erp_dbf, 1, "fsf_s_lhand_nv"); - zfcp_erp_port_reopen(unit->port, 0); + zfcp_erp_port_reopen(unit->port, 0, 111, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -3681,7 +3693,7 @@ zfcp_fsf_send_fcp_command_task(struct zfcp_adapter *adapter, zfcp_get_busid_by_unit(unit), unit->port->wwpn, unit->fcp_lun); - zfcp_erp_unit_shutdown(unit, 0); + zfcp_erp_unit_shutdown(unit, 0, 131, (u64)fsf_req); retval = -EINVAL; } goto no_fit; @@ -3841,7 +3853,8 @@ zfcp_fsf_send_fcp_command_handler(struct zfcp_fsf_req *fsf_req) sizeof (union fsf_status_qual)); debug_text_event(fsf_req->adapter->erp_dbf, 1, "fsf_s_phand_nv"); - zfcp_erp_adapter_reopen(unit->port->adapter, 0); + zfcp_erp_adapter_reopen(unit->port->adapter, 0, 112, + (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -3859,7 +3872,7 @@ zfcp_fsf_send_fcp_command_handler(struct zfcp_fsf_req *fsf_req) sizeof (union fsf_status_qual)); debug_text_event(fsf_req->adapter->erp_dbf, 1, "fsf_s_uhand_nv"); - zfcp_erp_port_reopen(unit->port, 0); + zfcp_erp_port_reopen(unit->port, 0, 113, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -3877,7 +3890,8 @@ zfcp_fsf_send_fcp_command_handler(struct zfcp_fsf_req *fsf_req) sizeof (union fsf_status_qual)); debug_text_event(fsf_req->adapter->erp_dbf, 1, "fsf_s_hand_mis"); - zfcp_erp_adapter_reopen(unit->port->adapter, 0); + zfcp_erp_adapter_reopen(unit->port->adapter, 0, 114, + (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -3889,7 +3903,8 @@ zfcp_fsf_send_fcp_command_handler(struct zfcp_fsf_req *fsf_req) /* stop operation for this adapter */ debug_text_exception(fsf_req->adapter->erp_dbf, 0, "fsf_s_class_nsup"); - zfcp_erp_adapter_shutdown(unit->port->adapter, 0); + zfcp_erp_adapter_shutdown(unit->port->adapter, 0, 132, + (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -3907,7 +3922,7 @@ zfcp_fsf_send_fcp_command_handler(struct zfcp_fsf_req *fsf_req) sizeof (union fsf_status_qual)); debug_text_event(fsf_req->adapter->erp_dbf, 1, "fsf_s_fcp_lun_nv"); - zfcp_erp_port_reopen(unit->port, 0); + zfcp_erp_port_reopen(unit->port, 0, 115, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -3945,7 +3960,8 @@ zfcp_fsf_send_fcp_command_handler(struct zfcp_fsf_req *fsf_req) /* stop operation for this adapter */ debug_text_event(fsf_req->adapter->erp_dbf, 0, "fsf_s_dir_ind_nv"); - zfcp_erp_adapter_shutdown(unit->port->adapter, 0); + zfcp_erp_adapter_shutdown(unit->port->adapter, 0, 133, + (u64)fsf_req); fsf_req->status |= 
ZFCP_STATUS_FSFREQ_ERROR; break; @@ -3960,7 +3976,8 @@ zfcp_fsf_send_fcp_command_handler(struct zfcp_fsf_req *fsf_req) /* stop operation for this adapter */ debug_text_event(fsf_req->adapter->erp_dbf, 0, "fsf_s_cmd_len_nv"); - zfcp_erp_adapter_shutdown(unit->port->adapter, 0); + zfcp_erp_adapter_shutdown(unit->port->adapter, 0, 134, + (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -4863,7 +4880,7 @@ static int zfcp_fsf_req_send(struct zfcp_fsf_req *fsf_req) req_queue->free_index -= fsf_req->sbal_number; req_queue->free_index += QDIO_MAX_BUFFERS_PER_Q; req_queue->free_index %= QDIO_MAX_BUFFERS_PER_Q; /* wrap */ - zfcp_erp_adapter_reopen(adapter, 0); + zfcp_erp_adapter_reopen(adapter, 0, 116, (u64)fsf_req); } else { req_queue->distance_from_int = new_distance_from_int; /* diff --git a/drivers/s390/scsi/zfcp_qdio.c b/drivers/s390/scsi/zfcp_qdio.c index 22fdc17e0d0e..d742d0a9df77 100644 --- a/drivers/s390/scsi/zfcp_qdio.c +++ b/drivers/s390/scsi/zfcp_qdio.c @@ -175,8 +175,8 @@ zfcp_qdio_handler_error_check(struct zfcp_adapter *adapter, unsigned int status, * which is set again in case we have missed by a mile. */ zfcp_erp_adapter_reopen(adapter, - ZFCP_STATUS_ADAPTER_LINK_UNPLUGGED | - ZFCP_STATUS_COMMON_ERP_FAILED); + ZFCP_STATUS_ADAPTER_LINK_UNPLUGGED | + ZFCP_STATUS_COMMON_ERP_FAILED, 140, 0); } return retval; } diff --git a/drivers/s390/scsi/zfcp_scsi.c b/drivers/s390/scsi/zfcp_scsi.c index 1198f0b27e3a..cd844b2ad7a1 100644 --- a/drivers/s390/scsi/zfcp_scsi.c +++ b/drivers/s390/scsi/zfcp_scsi.c @@ -529,7 +529,7 @@ static int zfcp_scsi_eh_host_reset_handler(struct scsi_cmnd *scpnt) unit->fcp_lun, unit->port->wwpn, zfcp_get_busid_by_adapter(unit->port->adapter)); - zfcp_erp_adapter_reopen(adapter, 0); + zfcp_erp_adapter_reopen(adapter, 0, 141, (u64)scpnt); zfcp_erp_wait(adapter); return SUCCESS; diff --git a/drivers/s390/scsi/zfcp_sysfs_adapter.c b/drivers/s390/scsi/zfcp_sysfs_adapter.c index ec340530c824..e0bbcc440a52 100644 --- a/drivers/s390/scsi/zfcp_sysfs_adapter.c +++ b/drivers/s390/scsi/zfcp_sysfs_adapter.c @@ -89,7 +89,7 @@ zfcp_sysfs_port_add_store(struct device *dev, struct device_attribute *attr, con retval = 0; - zfcp_erp_port_reopen(port, 0); + zfcp_erp_port_reopen(port, 0, 91, 0); zfcp_erp_wait(port->adapter); zfcp_port_put(port); out: @@ -147,7 +147,7 @@ zfcp_sysfs_port_remove_store(struct device *dev, struct device_attribute *attr, goto out; } - zfcp_erp_port_shutdown(port, 0); + zfcp_erp_port_shutdown(port, 0, 92, 0); zfcp_erp_wait(adapter); zfcp_port_put(port); zfcp_port_dequeue(port); @@ -193,7 +193,7 @@ zfcp_sysfs_adapter_failed_store(struct device *dev, struct device_attribute *att zfcp_erp_modify_adapter_status(adapter, 44, 0, ZFCP_STATUS_COMMON_RUNNING, ZFCP_SET); - zfcp_erp_adapter_reopen(adapter, ZFCP_STATUS_COMMON_ERP_FAILED); + zfcp_erp_adapter_reopen(adapter, ZFCP_STATUS_COMMON_ERP_FAILED, 93, 0); zfcp_erp_wait(adapter); out: up(&zfcp_data.config_sema); diff --git a/drivers/s390/scsi/zfcp_sysfs_port.c b/drivers/s390/scsi/zfcp_sysfs_port.c index 9cc2bc5be724..538195034c68 100644 --- a/drivers/s390/scsi/zfcp_sysfs_port.c +++ b/drivers/s390/scsi/zfcp_sysfs_port.c @@ -94,7 +94,7 @@ zfcp_sysfs_unit_add_store(struct device *dev, struct device_attribute *attr, con retval = 0; - zfcp_erp_unit_reopen(unit, 0); + zfcp_erp_unit_reopen(unit, 0, 94, 0); zfcp_erp_wait(unit->port->adapter); zfcp_unit_put(unit); out: @@ -150,7 +150,7 @@ zfcp_sysfs_unit_remove_store(struct device *dev, struct device_attribute *attr, goto out; } - zfcp_erp_unit_shutdown(unit, 
0); + zfcp_erp_unit_shutdown(unit, 0, 95, 0); zfcp_erp_wait(unit->port->adapter); zfcp_unit_put(unit); zfcp_unit_dequeue(unit); @@ -195,7 +195,7 @@ zfcp_sysfs_port_failed_store(struct device *dev, struct device_attribute *attr, zfcp_erp_modify_port_status(port, 45, 0, ZFCP_STATUS_COMMON_RUNNING, ZFCP_SET); - zfcp_erp_port_reopen(port, ZFCP_STATUS_COMMON_ERP_FAILED); + zfcp_erp_port_reopen(port, ZFCP_STATUS_COMMON_ERP_FAILED, 96, 0); zfcp_erp_wait(port->adapter); out: up(&zfcp_data.config_sema); diff --git a/drivers/s390/scsi/zfcp_sysfs_unit.c b/drivers/s390/scsi/zfcp_sysfs_unit.c index 52a5f6a25ff9..fd73568b44b4 100644 --- a/drivers/s390/scsi/zfcp_sysfs_unit.c +++ b/drivers/s390/scsi/zfcp_sysfs_unit.c @@ -96,7 +96,7 @@ zfcp_sysfs_unit_failed_store(struct device *dev, struct device_attribute *attr, zfcp_erp_modify_unit_status(unit, 46, 0, ZFCP_STATUS_COMMON_RUNNING, ZFCP_SET); - zfcp_erp_unit_reopen(unit, ZFCP_STATUS_COMMON_ERP_FAILED); + zfcp_erp_unit_reopen(unit, ZFCP_STATUS_COMMON_ERP_FAILED, 97, 0); zfcp_erp_wait(unit->port->adapter); out: up(&zfcp_data.config_sema); -- cgit v1.2.3-59-g8ed1b From 6f4f365e9c5d721c4d03ee8009dd6fab47feb045 Mon Sep 17 00:00:00 2001 From: Martin Peschke Date: Thu, 27 Mar 2008 14:22:04 +0100 Subject: [SCSI] zfcp: Add trace records for recovery actions. This patch writes trace records for various phases of a recovery action: action being created, action being processed, action continueing asynchronously, action gone, action timed out, action dismissed etc. Signed-off-by: Martin Peschke Signed-off-by: Christof Schmitt Signed-off-by: James Bottomley --- drivers/s390/scsi/zfcp_dbf.c | 35 +++++++++++++++++++++++++++++++++++ drivers/s390/scsi/zfcp_def.h | 9 +++++++++ drivers/s390/scsi/zfcp_erp.c | 6 ++++++ drivers/s390/scsi/zfcp_ext.h | 1 + 4 files changed, 51 insertions(+) (limited to 'drivers/s390') diff --git a/drivers/s390/scsi/zfcp_dbf.c b/drivers/s390/scsi/zfcp_dbf.c index f207b0bd0cad..466a689c538f 100644 --- a/drivers/s390/scsi/zfcp_dbf.c +++ b/drivers/s390/scsi/zfcp_dbf.c @@ -524,6 +524,7 @@ static const char *zfcp_rec_dbf_tags[] = { [ZFCP_REC_DBF_ID_THREAD] = "thread", [ZFCP_REC_DBF_ID_TARGET] = "target", [ZFCP_REC_DBF_ID_TRIGGER] = "trigger", + [ZFCP_REC_DBF_ID_ACTION] = "action", }; static const char *zfcp_rec_dbf_ids[] = { @@ -671,6 +672,11 @@ static const char *zfcp_rec_dbf_ids[] = { [139] = "hbaapi unit shutdown", [140] = "qdio error", [141] = "scsi host reset", + [142] = "dismissing fsf request for recovery action", + [143] = "recovery action timed out", + [144] = "recovery action gone", + [145] = "recovery action being processed", + [146] = "recovery action ready for next step", }; static int zfcp_rec_dbf_view_format(debug_info_t *id, struct debug_view *view, @@ -708,6 +714,12 @@ static int zfcp_rec_dbf_view_format(debug_info_t *id, struct debug_view *view, zfcp_dbf_out(&p, "port_status", "0x%08x", r->u.trigger.ps); zfcp_dbf_out(&p, "unit_status", "0x%08x", r->u.trigger.us); break; + case ZFCP_REC_DBF_ID_ACTION: + zfcp_dbf_out(&p, "erp_action", "0x%016Lx", r->u.action.action); + zfcp_dbf_out(&p, "fsf_req", "0x%016Lx", r->u.action.fsf_req); + zfcp_dbf_out(&p, "status", "0x%08Lx", r->u.action.status); + zfcp_dbf_out(&p, "step", "0x%08Lx", r->u.action.step); + break; } sprintf(p, "\n"); return (p - buf) + 1; @@ -861,6 +873,29 @@ void zfcp_rec_dbf_event_trigger(u8 id2, u64 ref, u8 want, u8 need, u64 action, spin_unlock_irqrestore(&adapter->rec_dbf_lock, flags); } +/** + * zfcp_rec_dbf_event_action - trace event showing progress of recovery action + * 
@id2: identifier + * @erp_action: error recovery action struct pointer + */ +void zfcp_rec_dbf_event_action(u8 id2, struct zfcp_erp_action *erp_action) +{ + struct zfcp_adapter *adapter = erp_action->adapter; + struct zfcp_rec_dbf_record *r = &adapter->rec_dbf_buf; + unsigned long flags; + + spin_lock_irqsave(&adapter->rec_dbf_lock, flags); + memset(r, 0, sizeof(*r)); + r->id = ZFCP_REC_DBF_ID_ACTION; + r->id2 = id2; + r->u.action.action = (u64)erp_action; + r->u.action.status = erp_action->status; + r->u.action.step = erp_action->step; + r->u.action.fsf_req = (u64)erp_action->fsf_req; + debug_event(adapter->rec_dbf, 4, r, sizeof(*r)); + spin_unlock_irqrestore(&adapter->rec_dbf_lock, flags); +} + static void _zfcp_san_dbf_event_common_ct(const char *tag, struct zfcp_fsf_req *fsf_req, u32 s_id, u32 d_id, void *buffer, int buflen) diff --git a/drivers/s390/scsi/zfcp_def.h b/drivers/s390/scsi/zfcp_def.h index aad75cfa1c60..9a4d870a820a 100644 --- a/drivers/s390/scsi/zfcp_def.h +++ b/drivers/s390/scsi/zfcp_def.h @@ -307,10 +307,18 @@ struct zfcp_rec_dbf_record_trigger { u64 fcp_lun; } __attribute__ ((packed)); +struct zfcp_rec_dbf_record_action { + u32 status; + u32 step; + u64 action; + u64 fsf_req; +} __attribute__ ((packed)); + struct zfcp_rec_dbf_record { u8 id; u8 id2; union { + struct zfcp_rec_dbf_record_action action; struct zfcp_rec_dbf_record_thread thread; struct zfcp_rec_dbf_record_target target; struct zfcp_rec_dbf_record_trigger trigger; @@ -318,6 +326,7 @@ struct zfcp_rec_dbf_record { } __attribute__ ((packed)); enum { + ZFCP_REC_DBF_ID_ACTION, ZFCP_REC_DBF_ID_THREAD, ZFCP_REC_DBF_ID_TARGET, ZFCP_REC_DBF_ID_TRIGGER, diff --git a/drivers/s390/scsi/zfcp_erp.c b/drivers/s390/scsi/zfcp_erp.c index 55e034b10ded..335ab70181e8 100644 --- a/drivers/s390/scsi/zfcp_erp.c +++ b/drivers/s390/scsi/zfcp_erp.c @@ -893,8 +893,10 @@ zfcp_erp_strategy_check_fsfreq(struct zfcp_erp_action *erp_action) "a_ca_disreq"); erp_action->fsf_req->status |= ZFCP_STATUS_FSFREQ_DISMISSED; + zfcp_rec_dbf_event_action(142, erp_action); } if (erp_action->status & ZFCP_STATUS_ERP_TIMEDOUT) { + zfcp_rec_dbf_event_action(143, erp_action); ZFCP_LOG_NORMAL("error: erp step timed out " "(action=%d, fsf_req=%p)\n ", erp_action->action, @@ -3162,6 +3164,8 @@ zfcp_erp_action_dequeue(struct zfcp_erp_action *erp_action) debug_text_event(adapter->erp_dbf, 4, "a_actdeq"); debug_event(adapter->erp_dbf, 4, &erp_action->action, sizeof (int)); list_del(&erp_action->list); + zfcp_rec_dbf_event_action(144, erp_action); + switch (erp_action->action) { case ZFCP_ERP_ACTION_REOPEN_UNIT: atomic_clear_mask(ZFCP_STATUS_COMMON_ERP_INUSE, @@ -3305,6 +3309,7 @@ static void zfcp_erp_action_to_running(struct zfcp_erp_action *erp_action) debug_text_event(adapter->erp_dbf, 6, "a_toru"); debug_event(adapter->erp_dbf, 6, &erp_action->action, sizeof (int)); list_move(&erp_action->list, &erp_action->adapter->erp_running_head); + zfcp_rec_dbf_event_action(145, erp_action); } static void zfcp_erp_action_to_ready(struct zfcp_erp_action *erp_action) @@ -3314,6 +3319,7 @@ static void zfcp_erp_action_to_ready(struct zfcp_erp_action *erp_action) debug_text_event(adapter->erp_dbf, 6, "a_tore"); debug_event(adapter->erp_dbf, 6, &erp_action->action, sizeof (int)); list_move(&erp_action->list, &erp_action->adapter->erp_ready_head); + zfcp_rec_dbf_event_action(146, erp_action); } void zfcp_erp_port_boxed(struct zfcp_port *port, u8 id, u64 ref) diff --git a/drivers/s390/scsi/zfcp_ext.h b/drivers/s390/scsi/zfcp_ext.h index e10ed145baa7..6de0147d84d8 100644 --- 
a/drivers/s390/scsi/zfcp_ext.h +++ b/drivers/s390/scsi/zfcp_ext.h @@ -173,6 +173,7 @@ extern void zfcp_rec_dbf_event_unit(u8 id, u64 ref, struct zfcp_unit *unit); extern void zfcp_rec_dbf_event_trigger(u8 id, u64 ref, u8 want, u8 need, u64 action, struct zfcp_adapter *, struct zfcp_port *, struct zfcp_unit *); +extern void zfcp_rec_dbf_event_action(u8 id, struct zfcp_erp_action *); extern void zfcp_hba_dbf_event_fsf_response(struct zfcp_fsf_req *); extern void zfcp_hba_dbf_event_fsf_unsol(const char *, struct zfcp_adapter *, -- cgit v1.2.3-59-g8ed1b From 507e49693a074e878f20718fb97a5da01ccd9cbd Mon Sep 17 00:00:00 2001 From: Martin Peschke Date: Thu, 27 Mar 2008 14:22:05 +0100 Subject: [SCSI] zfcp: Remove obsolete erp_dbf trace This patch removes the now obsolete erp_dbf trace. Signed-off-by: Martin Peschke Signed-off-by: Christof Schmitt Signed-off-by: James Bottomley --- drivers/s390/scsi/zfcp_aux.c | 1 - drivers/s390/scsi/zfcp_ccw.c | 3 - drivers/s390/scsi/zfcp_dbf.c | 11 -- drivers/s390/scsi/zfcp_def.h | 8 -- drivers/s390/scsi/zfcp_erp.c | 305 +----------------------------------------- drivers/s390/scsi/zfcp_fsf.c | 199 --------------------------- drivers/s390/scsi/zfcp_qdio.c | 2 - 7 files changed, 4 insertions(+), 525 deletions(-) (limited to 'drivers/s390') diff --git a/drivers/s390/scsi/zfcp_aux.c b/drivers/s390/scsi/zfcp_aux.c index d2a744200c91..05a33c247c68 100644 --- a/drivers/s390/scsi/zfcp_aux.c +++ b/drivers/s390/scsi/zfcp_aux.c @@ -1030,7 +1030,6 @@ zfcp_adapter_enqueue(struct ccw_device *ccw_device) /* initialize debug locks */ - spin_lock_init(&adapter->erp_dbf_lock); spin_lock_init(&adapter->hba_dbf_lock); spin_lock_init(&adapter->san_dbf_lock); spin_lock_init(&adapter->scsi_dbf_lock); diff --git a/drivers/s390/scsi/zfcp_ccw.c b/drivers/s390/scsi/zfcp_ccw.c index 8edd90583cd4..db9f538362a6 100644 --- a/drivers/s390/scsi/zfcp_ccw.c +++ b/drivers/s390/scsi/zfcp_ccw.c @@ -223,19 +223,16 @@ zfcp_ccw_notify(struct ccw_device *ccw_device, int event) case CIO_GONE: ZFCP_LOG_NORMAL("adapter %s: device gone\n", zfcp_get_busid_by_adapter(adapter)); - debug_text_event(adapter->erp_dbf,1,"dev_gone"); zfcp_erp_adapter_shutdown(adapter, 0, 87, 0); break; case CIO_NO_PATH: ZFCP_LOG_NORMAL("adapter %s: no path\n", zfcp_get_busid_by_adapter(adapter)); - debug_text_event(adapter->erp_dbf,1,"no_path"); zfcp_erp_adapter_shutdown(adapter, 0, 88, 0); break; case CIO_OPER: ZFCP_LOG_NORMAL("adapter %s: operational again\n", zfcp_get_busid_by_adapter(adapter)); - debug_text_event(adapter->erp_dbf,1,"dev_oper"); zfcp_erp_modify_adapter_status(adapter, 11, 0, ZFCP_STATUS_COMMON_RUNNING, ZFCP_SET); diff --git a/drivers/s390/scsi/zfcp_dbf.c b/drivers/s390/scsi/zfcp_dbf.c index 466a689c538f..aecdc7f2dbc6 100644 --- a/drivers/s390/scsi/zfcp_dbf.c +++ b/drivers/s390/scsi/zfcp_dbf.c @@ -1301,15 +1301,6 @@ int zfcp_adapter_debug_register(struct zfcp_adapter *adapter) { char dbf_name[DEBUG_MAX_NAME_LEN]; - /* debug feature area which records recovery activity */ - sprintf(dbf_name, "zfcp_%s_erp", zfcp_get_busid_by_adapter(adapter)); - adapter->erp_dbf = debug_register(dbf_name, dbfsize, 2, - sizeof(struct zfcp_erp_dbf_record)); - if (!adapter->erp_dbf) - goto failed; - debug_register_view(adapter->erp_dbf, &debug_hex_ascii_view); - debug_set_level(adapter->erp_dbf, 3); - /* debug feature area which records recovery activity */ sprintf(dbf_name, "zfcp_%s_rec", zfcp_get_busid_by_adapter(adapter)); adapter->rec_dbf = debug_register(dbf_name, dbfsize, 1, @@ -1368,12 +1359,10 @@ void 
zfcp_adapter_debug_unregister(struct zfcp_adapter *adapter) debug_unregister(adapter->san_dbf); debug_unregister(adapter->hba_dbf); debug_unregister(adapter->rec_dbf); - debug_unregister(adapter->erp_dbf); adapter->scsi_dbf = NULL; adapter->san_dbf = NULL; adapter->hba_dbf = NULL; adapter->rec_dbf = NULL; - adapter->erp_dbf = NULL; } #undef ZFCP_LOG_AREA diff --git a/drivers/s390/scsi/zfcp_def.h b/drivers/s390/scsi/zfcp_def.h index 9a4d870a820a..85c0488719f7 100644 --- a/drivers/s390/scsi/zfcp_def.h +++ b/drivers/s390/scsi/zfcp_def.h @@ -274,11 +274,6 @@ struct zfcp_dbf_dump { u8 data[]; /* dump data */ } __attribute__ ((packed)); -/* FIXME: to be inflated when reworking the erp dbf */ -struct zfcp_erp_dbf_record { - u8 dummy[16]; -} __attribute__ ((packed)); - struct zfcp_rec_dbf_record_thread { u32 sema; u32 total; @@ -969,17 +964,14 @@ struct zfcp_adapter { u32 erp_low_mem_count; /* nr of erp actions waiting for memory */ struct zfcp_port *nameserver_port; /* adapter's nameserver */ - debug_info_t *erp_dbf; debug_info_t *rec_dbf; debug_info_t *hba_dbf; debug_info_t *san_dbf; /* debug feature areas */ debug_info_t *scsi_dbf; - spinlock_t erp_dbf_lock; spinlock_t rec_dbf_lock; spinlock_t hba_dbf_lock; spinlock_t san_dbf_lock; spinlock_t scsi_dbf_lock; - struct zfcp_erp_dbf_record erp_dbf_buf; struct zfcp_rec_dbf_record rec_dbf_buf; struct zfcp_hba_dbf_record hba_dbf_buf; struct zfcp_san_dbf_record san_dbf_buf; diff --git a/drivers/s390/scsi/zfcp_erp.c b/drivers/s390/scsi/zfcp_erp.c index 335ab70181e8..feb1fda33d25 100644 --- a/drivers/s390/scsi/zfcp_erp.c +++ b/drivers/s390/scsi/zfcp_erp.c @@ -132,11 +132,9 @@ static void zfcp_close_qdio(struct zfcp_adapter *adapter) atomic_clear_mask(ZFCP_STATUS_ADAPTER_QDIOUP, &adapter->status); write_unlock_irq(&req_queue->queue_lock); - debug_text_event(adapter->erp_dbf, 3, "qdio_down2a"); while (qdio_shutdown(adapter->ccw_device, QDIO_FLAG_CLEANUP_USING_CLEAR) == -EINPROGRESS) ssleep(1); - debug_text_event(adapter->erp_dbf, 3, "qdio_down2b"); /* cleanup used outbound sbals */ count = atomic_read(&req_queue->free_count); @@ -209,7 +207,6 @@ static int zfcp_erp_adapter_reopen_internal(struct zfcp_adapter *adapter, { int retval; - debug_text_event(adapter->erp_dbf, 5, "a_ro"); ZFCP_LOG_DEBUG("reopen adapter %s\n", zfcp_get_busid_by_adapter(adapter)); @@ -218,7 +215,6 @@ static int zfcp_erp_adapter_reopen_internal(struct zfcp_adapter *adapter, if (atomic_test_mask(ZFCP_STATUS_COMMON_ERP_FAILED, &adapter->status)) { ZFCP_LOG_DEBUG("skipped reopen of failed adapter %s\n", zfcp_get_busid_by_adapter(adapter)); - debug_text_event(adapter->erp_dbf, 5, "a_ro_f"); /* ensure propagation of failed status to new devices */ zfcp_erp_adapter_failed(adapter, 13, 0); retval = -EIO; @@ -403,7 +399,6 @@ zfcp_erp_adisc_handler(unsigned long data) "force physical port reopen " "(adapter %s, port d_id=0x%06x)\n", zfcp_get_busid_by_adapter(adapter), d_id); - debug_text_event(adapter->erp_dbf, 3, "forcreop"); if (zfcp_erp_port_forced_reopen(port, 0, 63, 0)) ZFCP_LOG_NORMAL("failed reopen of port " "(adapter %s, wwpn=0x%016Lx)\n", @@ -492,10 +487,6 @@ static int zfcp_erp_port_forced_reopen_internal(struct zfcp_port *port, int clear_mask, u8 id, u64 ref) { int retval; - struct zfcp_adapter *adapter = port->adapter; - - debug_text_event(adapter->erp_dbf, 5, "pf_ro"); - debug_event(adapter->erp_dbf, 5, &port->wwpn, sizeof (wwn_t)); ZFCP_LOG_DEBUG("forced reopen of port 0x%016Lx on adapter %s\n", port->wwpn, zfcp_get_busid_by_port(port)); @@ -506,8 +497,6 @@ static int 
zfcp_erp_port_forced_reopen_internal(struct zfcp_port *port, ZFCP_LOG_DEBUG("skipped forced reopen of failed port 0x%016Lx " "on adapter %s\n", port->wwpn, zfcp_get_busid_by_port(port)); - debug_text_event(adapter->erp_dbf, 5, "pf_ro_f"); - debug_event(adapter->erp_dbf, 5, &port->wwpn, sizeof (wwn_t)); retval = -EIO; goto out; } @@ -560,10 +549,6 @@ static int zfcp_erp_port_reopen_internal(struct zfcp_port *port, int clear_mask, u8 id, u64 ref) { int retval; - struct zfcp_adapter *adapter = port->adapter; - - debug_text_event(adapter->erp_dbf, 5, "p_ro"); - debug_event(adapter->erp_dbf, 5, &port->wwpn, sizeof (wwn_t)); ZFCP_LOG_DEBUG("reopen of port 0x%016Lx on adapter %s\n", port->wwpn, zfcp_get_busid_by_port(port)); @@ -574,8 +559,6 @@ static int zfcp_erp_port_reopen_internal(struct zfcp_port *port, int clear_mask, ZFCP_LOG_DEBUG("skipped reopen of failed port 0x%016Lx " "on adapter %s\n", port->wwpn, zfcp_get_busid_by_port(port)); - debug_text_event(adapter->erp_dbf, 5, "p_ro_f"); - debug_event(adapter->erp_dbf, 5, &port->wwpn, sizeof (wwn_t)); /* ensure propagation of failed status to new devices */ zfcp_erp_port_failed(port, 14, 0); retval = -EIO; @@ -630,8 +613,6 @@ static int zfcp_erp_unit_reopen_internal(struct zfcp_unit *unit, int clear_mask, int retval; struct zfcp_adapter *adapter = unit->port->adapter; - debug_text_event(adapter->erp_dbf, 5, "u_ro"); - debug_event(adapter->erp_dbf, 5, &unit->fcp_lun, sizeof (fcp_lun_t)); ZFCP_LOG_DEBUG("reopen of unit 0x%016Lx on port 0x%016Lx " "on adapter %s\n", unit->fcp_lun, unit->port->wwpn, zfcp_get_busid_by_unit(unit)); @@ -643,9 +624,6 @@ static int zfcp_erp_unit_reopen_internal(struct zfcp_unit *unit, int clear_mask, "on port 0x%016Lx on adapter %s\n", unit->fcp_lun, unit->port->wwpn, zfcp_get_busid_by_unit(unit)); - debug_text_event(adapter->erp_dbf, 5, "u_ro_f"); - debug_event(adapter->erp_dbf, 5, &unit->fcp_lun, - sizeof (fcp_lun_t)); retval = -EIO; goto out; } @@ -690,7 +668,6 @@ int zfcp_erp_unit_reopen(struct zfcp_unit *unit, int clear_mask, u8 id, u64 ref) */ static void zfcp_erp_adapter_block(struct zfcp_adapter *adapter, int clear_mask) { - debug_text_event(adapter->erp_dbf, 6, "a_bl"); zfcp_erp_modify_adapter_status(adapter, 15, 0, ZFCP_STATUS_COMMON_UNBLOCKED | clear_mask, ZFCP_CLEAR); @@ -725,7 +702,6 @@ static int atomic_test_and_clear_mask(unsigned long mask, atomic_t *v) */ static void zfcp_erp_adapter_unblock(struct zfcp_adapter *adapter) { - debug_text_event(adapter->erp_dbf, 6, "a_ubl"); if (atomic_test_and_set_mask(ZFCP_STATUS_COMMON_UNBLOCKED, &adapter->status)) zfcp_rec_dbf_event_adapter(16, 0, adapter); @@ -743,10 +719,6 @@ static void zfcp_erp_adapter_unblock(struct zfcp_adapter *adapter) static void zfcp_erp_port_block(struct zfcp_port *port, int clear_mask) { - struct zfcp_adapter *adapter = port->adapter; - - debug_text_event(adapter->erp_dbf, 6, "p_bl"); - debug_event(adapter->erp_dbf, 6, &port->wwpn, sizeof (wwn_t)); zfcp_erp_modify_port_status(port, 17, 0, ZFCP_STATUS_COMMON_UNBLOCKED | clear_mask, ZFCP_CLEAR); @@ -762,10 +734,6 @@ zfcp_erp_port_block(struct zfcp_port *port, int clear_mask) static void zfcp_erp_port_unblock(struct zfcp_port *port) { - struct zfcp_adapter *adapter = port->adapter; - - debug_text_event(adapter->erp_dbf, 6, "p_ubl"); - debug_event(adapter->erp_dbf, 6, &port->wwpn, sizeof (wwn_t)); if (atomic_test_and_set_mask(ZFCP_STATUS_COMMON_UNBLOCKED, &port->status)) zfcp_rec_dbf_event_port(18, 0, port); @@ -783,10 +751,6 @@ zfcp_erp_port_unblock(struct zfcp_port *port) static void 
zfcp_erp_unit_block(struct zfcp_unit *unit, int clear_mask) { - struct zfcp_adapter *adapter = unit->port->adapter; - - debug_text_event(adapter->erp_dbf, 6, "u_bl"); - debug_event(adapter->erp_dbf, 6, &unit->fcp_lun, sizeof (fcp_lun_t)); zfcp_erp_modify_unit_status(unit, 19, 0, ZFCP_STATUS_COMMON_UNBLOCKED | clear_mask, ZFCP_CLEAR); @@ -802,10 +766,6 @@ zfcp_erp_unit_block(struct zfcp_unit *unit, int clear_mask) static void zfcp_erp_unit_unblock(struct zfcp_unit *unit) { - struct zfcp_adapter *adapter = unit->port->adapter; - - debug_text_event(adapter->erp_dbf, 6, "u_ubl"); - debug_event(adapter->erp_dbf, 6, &unit->fcp_lun, sizeof (fcp_lun_t)); if (atomic_test_and_set_mask(ZFCP_STATUS_COMMON_UNBLOCKED, &unit->status)) zfcp_rec_dbf_event_unit(20, 0, unit); @@ -816,9 +776,6 @@ zfcp_erp_action_ready(struct zfcp_erp_action *erp_action) { struct zfcp_adapter *adapter = erp_action->adapter; - debug_text_event(adapter->erp_dbf, 4, "a_ar"); - debug_event(adapter->erp_dbf, 4, &erp_action->action, sizeof (int)); - zfcp_erp_action_to_ready(erp_action); up(&adapter->erp_ready_sem); zfcp_rec_dbf_event_thread(2, adapter, 0); @@ -883,14 +840,9 @@ zfcp_erp_strategy_check_fsfreq(struct zfcp_erp_action *erp_action) if (zfcp_reqlist_find_safe(adapter, erp_action->fsf_req) && erp_action->fsf_req->erp_action == erp_action) { /* fsf_req still exists */ - debug_text_event(adapter->erp_dbf, 3, "a_ca_req"); - debug_event(adapter->erp_dbf, 3, &erp_action->fsf_req, - sizeof (unsigned long)); /* dismiss fsf_req of timed out/dismissed erp_action */ if (erp_action->status & (ZFCP_STATUS_ERP_DISMISSED | ZFCP_STATUS_ERP_TIMEDOUT)) { - debug_text_event(adapter->erp_dbf, 3, - "a_ca_disreq"); erp_action->fsf_req->status |= ZFCP_STATUS_FSFREQ_DISMISSED; zfcp_rec_dbf_event_action(142, erp_action); @@ -915,7 +867,6 @@ zfcp_erp_strategy_check_fsfreq(struct zfcp_erp_action *erp_action) erp_action->fsf_req = NULL; } } else { - debug_text_event(adapter->erp_dbf, 3, "a_ca_gonereq"); /* * even if this fsf_req has gone, forget about * association between erp_action and fsf_req @@ -923,8 +874,7 @@ zfcp_erp_strategy_check_fsfreq(struct zfcp_erp_action *erp_action) erp_action->fsf_req = NULL; } spin_unlock(&adapter->req_list_lock); - } else - debug_text_event(adapter->erp_dbf, 3, "a_ca_noreq"); + } } /** @@ -936,19 +886,11 @@ zfcp_erp_strategy_check_fsfreq(struct zfcp_erp_action *erp_action) static void zfcp_erp_async_handler_nolock(struct zfcp_erp_action *erp_action, unsigned long set_mask) { - struct zfcp_adapter *adapter = erp_action->adapter; - if (zfcp_erp_action_exists(erp_action) == ZFCP_ERP_ACTION_RUNNING) { - debug_text_event(adapter->erp_dbf, 2, "a_asyh_ex"); - debug_event(adapter->erp_dbf, 2, &erp_action->action, - sizeof (int)); erp_action->status |= set_mask; zfcp_erp_action_ready(erp_action); } else { /* action is ready or gone - nothing to do */ - debug_text_event(adapter->erp_dbf, 3, "a_asyh_gone"); - debug_event(adapter->erp_dbf, 3, &erp_action->action, - sizeof (int)); } } @@ -975,10 +917,6 @@ static void zfcp_erp_memwait_handler(unsigned long data) { struct zfcp_erp_action *erp_action = (struct zfcp_erp_action *) data; - struct zfcp_adapter *adapter = erp_action->adapter; - - debug_text_event(adapter->erp_dbf, 2, "a_mwh"); - debug_event(adapter->erp_dbf, 2, &erp_action->action, sizeof (int)); zfcp_erp_async_handler(erp_action, 0); } @@ -991,10 +929,6 @@ zfcp_erp_memwait_handler(unsigned long data) static void zfcp_erp_timeout_handler(unsigned long data) { struct zfcp_erp_action *erp_action = (struct zfcp_erp_action 
*) data; - struct zfcp_adapter *adapter = erp_action->adapter; - - debug_text_event(adapter->erp_dbf, 2, "a_th"); - debug_event(adapter->erp_dbf, 2, &erp_action->action, sizeof (int)); zfcp_erp_async_handler(erp_action, ZFCP_STATUS_ERP_TIMEDOUT); } @@ -1009,11 +943,6 @@ static void zfcp_erp_timeout_handler(unsigned long data) */ static void zfcp_erp_action_dismiss(struct zfcp_erp_action *erp_action) { - struct zfcp_adapter *adapter = erp_action->adapter; - - debug_text_event(adapter->erp_dbf, 2, "a_adis"); - debug_event(adapter->erp_dbf, 2, &erp_action->action, sizeof (int)); - erp_action->status |= ZFCP_STATUS_ERP_DISMISSED; if (zfcp_erp_action_exists(erp_action) == ZFCP_ERP_ACTION_RUNNING) zfcp_erp_action_ready(erp_action); @@ -1031,12 +960,10 @@ zfcp_erp_thread_setup(struct zfcp_adapter *adapter) ZFCP_LOG_NORMAL("error: creation of erp thread failed for " "adapter %s\n", zfcp_get_busid_by_adapter(adapter)); - debug_text_event(adapter->erp_dbf, 5, "a_thset_fail"); } else { wait_event(adapter->erp_thread_wqh, atomic_test_mask(ZFCP_STATUS_ADAPTER_ERP_THREAD_UP, &adapter->status)); - debug_text_event(adapter->erp_dbf, 5, "a_thset_ok"); } return (retval < 0); @@ -1072,8 +999,6 @@ zfcp_erp_thread_kill(struct zfcp_adapter *adapter) atomic_clear_mask(ZFCP_STATUS_ADAPTER_ERP_THREAD_KILL, &adapter->status); - debug_text_event(adapter->erp_dbf, 5, "a_thki_ok"); - return retval; } @@ -1096,7 +1021,6 @@ zfcp_erp_thread(void *data) /* Block all signals */ siginitsetinv(&current->blocked, 0); atomic_set_mask(ZFCP_STATUS_ADAPTER_ERP_THREAD_UP, &adapter->status); - debug_text_event(adapter->erp_dbf, 5, "a_th_run"); wake_up(&adapter->erp_thread_wqh); while (!atomic_test_mask(ZFCP_STATUS_ADAPTER_ERP_THREAD_KILL, @@ -1124,11 +1048,9 @@ zfcp_erp_thread(void *data) zfcp_rec_dbf_event_thread(4, adapter, 1); down_interruptible(&adapter->erp_ready_sem); zfcp_rec_dbf_event_thread(5, adapter, 1); - debug_text_event(adapter->erp_dbf, 5, "a_th_woken"); } atomic_clear_mask(ZFCP_STATUS_ADAPTER_ERP_THREAD_UP, &adapter->status); - debug_text_event(adapter->erp_dbf, 5, "a_th_stop"); wake_up(&adapter->erp_thread_wqh); return 0; @@ -1164,7 +1086,6 @@ zfcp_erp_strategy(struct zfcp_erp_action *erp_action) /* dequeue dismissed action and leave, if required */ retval = zfcp_erp_strategy_check_action(erp_action, retval); if (retval == ZFCP_ERP_DISMISSED) { - debug_text_event(adapter->erp_dbf, 4, "a_st_dis1"); goto unlock; } @@ -1215,20 +1136,17 @@ zfcp_erp_strategy(struct zfcp_erp_action *erp_action) element was timed out. 
*/ if (adapter->erp_total_count == adapter->erp_low_mem_count) { - debug_text_event(adapter->erp_dbf, 3, "a_st_lowmem"); ZFCP_LOG_NORMAL("error: no mempool elements available, " "restarting I/O on adapter %s " "to free mempool\n", zfcp_get_busid_by_adapter(adapter)); zfcp_erp_adapter_reopen_internal(adapter, 0, 66, 0); } else { - debug_text_event(adapter->erp_dbf, 2, "a_st_memw"); retval = zfcp_erp_strategy_memwait(erp_action); } goto unlock; case ZFCP_ERP_CONTINUES: /* leave since this action runs asynchronously */ - debug_text_event(adapter->erp_dbf, 6, "a_st_cont"); if (erp_action->status & ZFCP_STATUS_ERP_LOWMEM) { --adapter->erp_low_mem_count; erp_action->status &= ~ZFCP_STATUS_ERP_LOWMEM; @@ -1257,7 +1175,6 @@ zfcp_erp_strategy(struct zfcp_erp_action *erp_action) * action is repeated in order to process state change */ if (retval == ZFCP_ERP_EXIT) { - debug_text_event(adapter->erp_dbf, 2, "a_st_exit"); goto unlock; } @@ -1283,8 +1200,6 @@ zfcp_erp_strategy(struct zfcp_erp_action *erp_action) if (retval != ZFCP_ERP_DISMISSED) zfcp_erp_strategy_check_queues(adapter); - debug_text_event(adapter->erp_dbf, 6, "a_st_done"); - return retval; } @@ -1299,17 +1214,12 @@ zfcp_erp_strategy(struct zfcp_erp_action *erp_action) static int zfcp_erp_strategy_check_action(struct zfcp_erp_action *erp_action, int retval) { - struct zfcp_adapter *adapter = erp_action->adapter; - zfcp_erp_strategy_check_fsfreq(erp_action); - debug_event(adapter->erp_dbf, 5, &erp_action->action, sizeof (int)); if (erp_action->status & ZFCP_STATUS_ERP_DISMISSED) { - debug_text_event(adapter->erp_dbf, 3, "a_stcd_dis"); zfcp_erp_action_dequeue(erp_action); retval = ZFCP_ERP_DISMISSED; - } else - debug_text_event(adapter->erp_dbf, 5, "a_stcd_nodis"); + } return retval; } @@ -1318,7 +1228,6 @@ static int zfcp_erp_strategy_do_action(struct zfcp_erp_action *erp_action) { int retval = ZFCP_ERP_FAILED; - struct zfcp_adapter *adapter = erp_action->adapter; /* * try to execute/continue action as far as possible, @@ -1348,9 +1257,6 @@ zfcp_erp_strategy_do_action(struct zfcp_erp_action *erp_action) break; default: - debug_text_exception(adapter->erp_dbf, 1, "a_stda_bug"); - debug_event(adapter->erp_dbf, 1, &erp_action->action, - sizeof (int)); ZFCP_LOG_NORMAL("bug: unknown erp action requested on " "adapter %s (action=%d)\n", zfcp_get_busid_by_adapter(erp_action->adapter), @@ -1372,10 +1278,7 @@ static int zfcp_erp_strategy_memwait(struct zfcp_erp_action *erp_action) { int retval = ZFCP_ERP_CONTINUES; - struct zfcp_adapter *adapter = erp_action->adapter; - debug_text_event(adapter->erp_dbf, 6, "a_mwinit"); - debug_event(adapter->erp_dbf, 6, &erp_action->action, sizeof (int)); init_timer(&erp_action->timer); erp_action->timer.function = zfcp_erp_memwait_handler; erp_action->timer.data = (unsigned long) erp_action; @@ -1398,7 +1301,6 @@ zfcp_erp_adapter_failed(struct zfcp_adapter *adapter, u8 id, u64 ref) ZFCP_STATUS_COMMON_ERP_FAILED, ZFCP_SET); ZFCP_LOG_NORMAL("adapter erp failed on adapter %s\n", zfcp_get_busid_by_adapter(adapter)); - debug_text_event(adapter->erp_dbf, 2, "a_afail"); } /* @@ -1420,9 +1322,6 @@ zfcp_erp_port_failed(struct zfcp_port *port, u8 id, u64 ref) else ZFCP_LOG_NORMAL("port erp failed (adapter %s, wwpn=0x%016Lx)\n", zfcp_get_busid_by_port(port), port->wwpn); - - debug_text_event(port->adapter->erp_dbf, 2, "p_pfail"); - debug_event(port->adapter->erp_dbf, 2, &port->wwpn, sizeof (wwn_t)); } /* @@ -1440,9 +1339,6 @@ zfcp_erp_unit_failed(struct zfcp_unit *unit, u8 id, u64 ref) ZFCP_LOG_NORMAL("unit erp failed on 
unit 0x%016Lx on port 0x%016Lx " " on adapter %s\n", unit->fcp_lun, unit->port->wwpn, zfcp_get_busid_by_unit(unit)); - debug_text_event(unit->port->adapter->erp_dbf, 2, "u_ufail"); - debug_event(unit->port->adapter->erp_dbf, 2, - &unit->fcp_lun, sizeof (fcp_lun_t)); } /* @@ -1466,10 +1362,6 @@ zfcp_erp_strategy_check_target(struct zfcp_erp_action *erp_action, int result) struct zfcp_port *port = erp_action->port; struct zfcp_unit *unit = erp_action->unit; - debug_text_event(adapter->erp_dbf, 5, "a_stct_norm"); - debug_event(adapter->erp_dbf, 5, &erp_action->action, sizeof (int)); - debug_event(adapter->erp_dbf, 5, &result, sizeof (int)); - switch (erp_action->action) { case ZFCP_ERP_ACTION_REOPEN_UNIT: @@ -1496,9 +1388,6 @@ zfcp_erp_strategy_statechange(int action, struct zfcp_port *port, struct zfcp_unit *unit, int retval) { - debug_text_event(adapter->erp_dbf, 3, "a_stsc"); - debug_event(adapter->erp_dbf, 3, &action, sizeof (int)); - switch (action) { case ZFCP_ERP_ACTION_REOPEN_ADAPTER: @@ -1551,10 +1440,6 @@ zfcp_erp_strategy_statechange_detected(atomic_t * target_status, u32 erp_status) static int zfcp_erp_strategy_check_unit(struct zfcp_unit *unit, int result) { - debug_text_event(unit->port->adapter->erp_dbf, 5, "u_stct"); - debug_event(unit->port->adapter->erp_dbf, 5, &unit->fcp_lun, - sizeof (fcp_lun_t)); - switch (result) { case ZFCP_ERP_SUCCEEDED : atomic_set(&unit->erp_counter, 0); @@ -1581,9 +1466,6 @@ zfcp_erp_strategy_check_unit(struct zfcp_unit *unit, int result) static int zfcp_erp_strategy_check_port(struct zfcp_port *port, int result) { - debug_text_event(port->adapter->erp_dbf, 5, "p_stct"); - debug_event(port->adapter->erp_dbf, 5, &port->wwpn, sizeof (wwn_t)); - switch (result) { case ZFCP_ERP_SUCCEEDED : atomic_set(&port->erp_counter, 0); @@ -1610,8 +1492,6 @@ zfcp_erp_strategy_check_port(struct zfcp_port *port, int result) static int zfcp_erp_strategy_check_adapter(struct zfcp_adapter *adapter, int result) { - debug_text_event(adapter->erp_dbf, 5, "a_stct"); - switch (result) { case ZFCP_ERP_SUCCEEDED : atomic_set(&adapter->erp_counter, 0); @@ -1703,9 +1583,6 @@ zfcp_erp_strategy_followup_actions(int action, struct zfcp_port *port, struct zfcp_unit *unit, int status) { - debug_text_event(adapter->erp_dbf, 5, "a_stfol"); - debug_event(adapter->erp_dbf, 5, &action, sizeof (int)); - /* initiate follow-up actions depending on success of finished action */ switch (action) { @@ -1749,12 +1626,10 @@ zfcp_erp_strategy_check_queues(struct zfcp_adapter *adapter) read_lock(&adapter->erp_lock); if (list_empty(&adapter->erp_ready_head) && list_empty(&adapter->erp_running_head)) { - debug_text_event(adapter->erp_dbf, 4, "a_cq_wake"); atomic_clear_mask(ZFCP_STATUS_ADAPTER_ERP_PENDING, &adapter->status); wake_up(&adapter->erp_done_wqh); - } else - debug_text_event(adapter->erp_dbf, 5, "a_cq_notempty"); + } read_unlock(&adapter->erp_lock); read_unlock_irqrestore(&zfcp_data.config_lock, flags); @@ -1786,16 +1661,13 @@ void zfcp_erp_modify_adapter_status(struct zfcp_adapter *adapter, u8 id, if (set_or_clear == ZFCP_SET) { changed = atomic_test_and_set_mask(mask, &adapter->status); - debug_text_event(adapter->erp_dbf, 3, "a_mod_as_s"); } else { changed = atomic_test_and_clear_mask(mask, &adapter->status); if (mask & ZFCP_STATUS_COMMON_ERP_FAILED) atomic_set(&adapter->erp_counter, 0); - debug_text_event(adapter->erp_dbf, 3, "a_mod_as_c"); } if (changed) zfcp_rec_dbf_event_adapter(id, ref, adapter); - debug_event(adapter->erp_dbf, 3, &mask, sizeof (u32)); /* Deal with all underlying 
devices, only pass common_mask */ if (common_mask) @@ -1818,17 +1690,13 @@ void zfcp_erp_modify_port_status(struct zfcp_port *port, u8 id, u64 ref, if (set_or_clear == ZFCP_SET) { changed = atomic_test_and_set_mask(mask, &port->status); - debug_text_event(port->adapter->erp_dbf, 3, "p_mod_ps_s"); } else { changed = atomic_test_and_clear_mask(mask, &port->status); if (mask & ZFCP_STATUS_COMMON_ERP_FAILED) atomic_set(&port->erp_counter, 0); - debug_text_event(port->adapter->erp_dbf, 3, "p_mod_ps_c"); } if (changed) zfcp_rec_dbf_event_port(id, ref, port); - debug_event(port->adapter->erp_dbf, 3, &port->wwpn, sizeof (wwn_t)); - debug_event(port->adapter->erp_dbf, 3, &mask, sizeof (u32)); /* Modify status of all underlying devices, only pass common mask */ if (common_mask) @@ -1850,19 +1718,14 @@ void zfcp_erp_modify_unit_status(struct zfcp_unit *unit, u8 id, u64 ref, if (set_or_clear == ZFCP_SET) { changed = atomic_test_and_set_mask(mask, &unit->status); - debug_text_event(unit->port->adapter->erp_dbf, 3, "u_mod_us_s"); } else { changed = atomic_test_and_clear_mask(mask, &unit->status); if (mask & ZFCP_STATUS_COMMON_ERP_FAILED) { atomic_set(&unit->erp_counter, 0); } - debug_text_event(unit->port->adapter->erp_dbf, 3, "u_mod_us_c"); } if (changed) zfcp_rec_dbf_event_unit(id, ref, unit); - debug_event(unit->port->adapter->erp_dbf, 3, &unit->fcp_lun, - sizeof (fcp_lun_t)); - debug_event(unit->port->adapter->erp_dbf, 3, &mask, sizeof (u32)); } /* @@ -1946,10 +1809,6 @@ zfcp_erp_adapter_strategy(struct zfcp_erp_action *erp_action) else retval = zfcp_erp_adapter_strategy_open(erp_action); - debug_text_event(adapter->erp_dbf, 3, "a_ast/ret"); - debug_event(adapter->erp_dbf, 3, &erp_action->action, sizeof (int)); - debug_event(adapter->erp_dbf, 3, &retval, sizeof (int)); - if (retval == ZFCP_ERP_FAILED) { ZFCP_LOG_INFO("Waiting to allow the adapter %s " "to recover itself\n", @@ -2075,7 +1934,6 @@ zfcp_erp_adapter_strategy_open_qdio(struct zfcp_erp_action *erp_action) zfcp_get_busid_by_adapter(adapter)); goto failed_qdio_establish; } - debug_text_event(adapter->erp_dbf, 3, "qdio_est"); if (qdio_activate(adapter->ccw_device, 0) != 0) { ZFCP_LOG_INFO("error: activation of QDIO queues failed " @@ -2083,7 +1941,6 @@ zfcp_erp_adapter_strategy_open_qdio(struct zfcp_erp_action *erp_action) zfcp_get_busid_by_adapter(adapter)); goto failed_qdio_activate; } - debug_text_event(adapter->erp_dbf, 3, "qdio_act"); /* * put buffers into response queue, @@ -2131,11 +1988,9 @@ zfcp_erp_adapter_strategy_open_qdio(struct zfcp_erp_action *erp_action) /* NOP */ failed_qdio_activate: - debug_text_event(adapter->erp_dbf, 3, "qdio_down1a"); while (qdio_shutdown(adapter->ccw_device, QDIO_FLAG_CLEANUP_USING_CLEAR) == -EINPROGRESS) ssleep(1); - debug_text_event(adapter->erp_dbf, 3, "qdio_down1b"); failed_qdio_establish: failed_sanity: @@ -2181,14 +2036,12 @@ zfcp_erp_adapter_strategy_open_fsf_xconfig(struct zfcp_erp_action *erp_action) write_unlock_irq(&adapter->erp_lock); if (zfcp_fsf_exchange_config_data(erp_action)) { retval = ZFCP_ERP_FAILED; - debug_text_event(adapter->erp_dbf, 5, "a_fstx_xf"); ZFCP_LOG_INFO("error: initiation of exchange of " "configuration data failed for " "adapter %s\n", zfcp_get_busid_by_adapter(adapter)); break; } - debug_text_event(adapter->erp_dbf, 6, "a_fstx_xok"); ZFCP_LOG_DEBUG("Xchange underway\n"); /* @@ -2254,13 +2107,10 @@ zfcp_erp_adapter_strategy_open_fsf_xport(struct zfcp_erp_action *erp_action) ret = zfcp_fsf_exchange_port_data(erp_action); if (ret == -EOPNOTSUPP) { - 
debug_text_event(adapter->erp_dbf, 3, "a_xport_notsupp"); return ZFCP_ERP_SUCCEEDED; } else if (ret) { - debug_text_event(adapter->erp_dbf, 3, "a_xport_failed"); return ZFCP_ERP_FAILED; } - debug_text_event(adapter->erp_dbf, 6, "a_xport_ok"); ret = ZFCP_ERP_SUCCEEDED; zfcp_rec_dbf_event_thread(8, adapter, 1); @@ -2319,7 +2169,6 @@ zfcp_erp_port_forced_strategy(struct zfcp_erp_action *erp_action) { int retval = ZFCP_ERP_FAILED; struct zfcp_port *port = erp_action->port; - struct zfcp_adapter *adapter = erp_action->adapter; switch (erp_action->step) { @@ -2356,11 +2205,6 @@ zfcp_erp_port_forced_strategy(struct zfcp_erp_action *erp_action) break; } - debug_text_event(adapter->erp_dbf, 3, "p_pfst/ret"); - debug_event(adapter->erp_dbf, 3, &port->wwpn, sizeof (wwn_t)); - debug_event(adapter->erp_dbf, 3, &erp_action->action, sizeof (int)); - debug_event(adapter->erp_dbf, 3, &retval, sizeof (int)); - return retval; } @@ -2378,7 +2222,6 @@ zfcp_erp_port_strategy(struct zfcp_erp_action *erp_action) { int retval = ZFCP_ERP_FAILED; struct zfcp_port *port = erp_action->port; - struct zfcp_adapter *adapter = erp_action->adapter; switch (erp_action->step) { @@ -2411,11 +2254,6 @@ zfcp_erp_port_strategy(struct zfcp_erp_action *erp_action) retval = zfcp_erp_port_strategy_open(erp_action); out: - debug_text_event(adapter->erp_dbf, 3, "p_pst/ret"); - debug_event(adapter->erp_dbf, 3, &port->wwpn, sizeof (wwn_t)); - debug_event(adapter->erp_dbf, 3, &erp_action->action, sizeof (int)); - debug_event(adapter->erp_dbf, 3, &retval, sizeof (int)); - return retval; } @@ -2607,13 +2445,7 @@ zfcp_erp_port_strategy_open_nameserver_wakeup(struct zfcp_erp_action read_lock_irqsave(&adapter->erp_lock, flags); list_for_each_entry_safe(erp_action, tmp, &adapter->erp_running_head, list) { - debug_text_event(adapter->erp_dbf, 4, "p_pstnsw_n"); - debug_event(adapter->erp_dbf, 4, &erp_action->port->wwpn, - sizeof (wwn_t)); if (erp_action->step == ZFCP_ERP_STEP_NAMESERVER_OPEN) { - debug_text_event(adapter->erp_dbf, 3, "p_pstnsw_w"); - debug_event(adapter->erp_dbf, 3, - &erp_action->port->wwpn, sizeof (wwn_t)); if (atomic_test_mask( ZFCP_STATUS_COMMON_ERP_FAILED, &adapter->nameserver_port->status)) @@ -2638,26 +2470,18 @@ static int zfcp_erp_port_forced_strategy_close(struct zfcp_erp_action *erp_action) { int retval; - struct zfcp_adapter *adapter = erp_action->adapter; - struct zfcp_port *port = erp_action->port; retval = zfcp_fsf_close_physical_port(erp_action); if (retval == -ENOMEM) { - debug_text_event(adapter->erp_dbf, 5, "o_pfstc_nomem"); - debug_event(adapter->erp_dbf, 5, &port->wwpn, sizeof (wwn_t)); retval = ZFCP_ERP_NOMEM; goto out; } erp_action->step = ZFCP_ERP_STEP_PHYS_PORT_CLOSING; if (retval != 0) { - debug_text_event(adapter->erp_dbf, 5, "o_pfstc_cpf"); - debug_event(adapter->erp_dbf, 5, &port->wwpn, sizeof (wwn_t)); /* could not send 'open', fail */ retval = ZFCP_ERP_FAILED; goto out; } - debug_text_event(adapter->erp_dbf, 6, "o_pfstc_cpok"); - debug_event(adapter->erp_dbf, 6, &port->wwpn, sizeof (wwn_t)); retval = ZFCP_ERP_CONTINUES; out: return retval; @@ -2667,10 +2491,6 @@ static int zfcp_erp_port_strategy_clearstati(struct zfcp_port *port) { int retval = 0; - struct zfcp_adapter *adapter = port->adapter; - - debug_text_event(adapter->erp_dbf, 5, "p_pstclst"); - debug_event(adapter->erp_dbf, 5, &port->wwpn, sizeof (wwn_t)); atomic_clear_mask(ZFCP_STATUS_COMMON_OPENING | ZFCP_STATUS_COMMON_CLOSING | @@ -2694,26 +2514,18 @@ static int zfcp_erp_port_strategy_close(struct zfcp_erp_action *erp_action) { int 
retval; - struct zfcp_adapter *adapter = erp_action->adapter; - struct zfcp_port *port = erp_action->port; retval = zfcp_fsf_close_port(erp_action); if (retval == -ENOMEM) { - debug_text_event(adapter->erp_dbf, 5, "p_pstc_nomem"); - debug_event(adapter->erp_dbf, 5, &port->wwpn, sizeof (wwn_t)); retval = ZFCP_ERP_NOMEM; goto out; } erp_action->step = ZFCP_ERP_STEP_PORT_CLOSING; if (retval != 0) { - debug_text_event(adapter->erp_dbf, 5, "p_pstc_cpf"); - debug_event(adapter->erp_dbf, 5, &port->wwpn, sizeof (wwn_t)); /* could not send 'close', fail */ retval = ZFCP_ERP_FAILED; goto out; } - debug_text_event(adapter->erp_dbf, 6, "p_pstc_cpok"); - debug_event(adapter->erp_dbf, 6, &port->wwpn, sizeof (wwn_t)); retval = ZFCP_ERP_CONTINUES; out: return retval; @@ -2731,26 +2543,18 @@ static int zfcp_erp_port_strategy_open_port(struct zfcp_erp_action *erp_action) { int retval; - struct zfcp_adapter *adapter = erp_action->adapter; - struct zfcp_port *port = erp_action->port; retval = zfcp_fsf_open_port(erp_action); if (retval == -ENOMEM) { - debug_text_event(adapter->erp_dbf, 5, "p_psto_nomem"); - debug_event(adapter->erp_dbf, 5, &port->wwpn, sizeof (wwn_t)); retval = ZFCP_ERP_NOMEM; goto out; } erp_action->step = ZFCP_ERP_STEP_PORT_OPENING; if (retval != 0) { - debug_text_event(adapter->erp_dbf, 5, "p_psto_opf"); - debug_event(adapter->erp_dbf, 5, &port->wwpn, sizeof (wwn_t)); /* could not send 'open', fail */ retval = ZFCP_ERP_FAILED; goto out; } - debug_text_event(adapter->erp_dbf, 6, "p_psto_opok"); - debug_event(adapter->erp_dbf, 6, &port->wwpn, sizeof (wwn_t)); retval = ZFCP_ERP_CONTINUES; out: return retval; @@ -2768,26 +2572,18 @@ static int zfcp_erp_port_strategy_open_common_lookup(struct zfcp_erp_action *erp_action) { int retval; - struct zfcp_adapter *adapter = erp_action->adapter; - struct zfcp_port *port = erp_action->port; retval = zfcp_ns_gid_pn_request(erp_action); if (retval == -ENOMEM) { - debug_text_event(adapter->erp_dbf, 5, "p_pstn_nomem"); - debug_event(adapter->erp_dbf, 5, &port->wwpn, sizeof (wwn_t)); retval = ZFCP_ERP_NOMEM; goto out; } erp_action->step = ZFCP_ERP_STEP_NAMESERVER_LOOKUP; if (retval != 0) { - debug_text_event(adapter->erp_dbf, 5, "p_pstn_ref"); - debug_event(adapter->erp_dbf, 5, &port->wwpn, sizeof (wwn_t)); /* could not send nameserver request, fail */ retval = ZFCP_ERP_FAILED; goto out; } - debug_text_event(adapter->erp_dbf, 6, "p_pstn_reok"); - debug_event(adapter->erp_dbf, 6, &port->wwpn, sizeof (wwn_t)); retval = ZFCP_ERP_CONTINUES; out: return retval; @@ -2808,7 +2604,6 @@ zfcp_erp_unit_strategy(struct zfcp_erp_action *erp_action) { int retval = ZFCP_ERP_FAILED; struct zfcp_unit *unit = erp_action->unit; - struct zfcp_adapter *adapter = erp_action->adapter; switch (erp_action->step) { @@ -2855,10 +2650,6 @@ zfcp_erp_unit_strategy(struct zfcp_erp_action *erp_action) break; } - debug_text_event(adapter->erp_dbf, 3, "u_ust/ret"); - debug_event(adapter->erp_dbf, 3, &unit->fcp_lun, sizeof (fcp_lun_t)); - debug_event(adapter->erp_dbf, 3, &erp_action->action, sizeof (int)); - debug_event(adapter->erp_dbf, 3, &retval, sizeof (int)); return retval; } @@ -2866,10 +2657,6 @@ static int zfcp_erp_unit_strategy_clearstati(struct zfcp_unit *unit) { int retval = 0; - struct zfcp_adapter *adapter = unit->port->adapter; - - debug_text_event(adapter->erp_dbf, 5, "u_ustclst"); - debug_event(adapter->erp_dbf, 5, &unit->fcp_lun, sizeof (fcp_lun_t)); atomic_clear_mask(ZFCP_STATUS_COMMON_OPENING | ZFCP_STATUS_COMMON_CLOSING | @@ -2893,28 +2680,18 @@ static int 
zfcp_erp_unit_strategy_close(struct zfcp_erp_action *erp_action) { int retval; - struct zfcp_adapter *adapter = erp_action->adapter; - struct zfcp_unit *unit = erp_action->unit; retval = zfcp_fsf_close_unit(erp_action); if (retval == -ENOMEM) { - debug_text_event(adapter->erp_dbf, 5, "u_ustc_nomem"); - debug_event(adapter->erp_dbf, 5, &unit->fcp_lun, - sizeof (fcp_lun_t)); retval = ZFCP_ERP_NOMEM; goto out; } erp_action->step = ZFCP_ERP_STEP_UNIT_CLOSING; if (retval != 0) { - debug_text_event(adapter->erp_dbf, 5, "u_ustc_cuf"); - debug_event(adapter->erp_dbf, 5, &unit->fcp_lun, - sizeof (fcp_lun_t)); /* could not send 'close', fail */ retval = ZFCP_ERP_FAILED; goto out; } - debug_text_event(adapter->erp_dbf, 6, "u_ustc_cuok"); - debug_event(adapter->erp_dbf, 6, &unit->fcp_lun, sizeof (fcp_lun_t)); retval = ZFCP_ERP_CONTINUES; out: @@ -2933,28 +2710,18 @@ static int zfcp_erp_unit_strategy_open(struct zfcp_erp_action *erp_action) { int retval; - struct zfcp_adapter *adapter = erp_action->adapter; - struct zfcp_unit *unit = erp_action->unit; retval = zfcp_fsf_open_unit(erp_action); if (retval == -ENOMEM) { - debug_text_event(adapter->erp_dbf, 5, "u_usto_nomem"); - debug_event(adapter->erp_dbf, 5, &unit->fcp_lun, - sizeof (fcp_lun_t)); retval = ZFCP_ERP_NOMEM; goto out; } erp_action->step = ZFCP_ERP_STEP_UNIT_OPENING; if (retval != 0) { - debug_text_event(adapter->erp_dbf, 5, "u_usto_ouf"); - debug_event(adapter->erp_dbf, 5, &unit->fcp_lun, - sizeof (fcp_lun_t)); /* could not send 'open', fail */ retval = ZFCP_ERP_FAILED; goto out; } - debug_text_event(adapter->erp_dbf, 6, "u_usto_ouok"); - debug_event(adapter->erp_dbf, 6, &unit->fcp_lun, sizeof (fcp_lun_t)); retval = ZFCP_ERP_CONTINUES; out: return retval; @@ -3000,17 +2767,11 @@ static int zfcp_erp_action_enqueue(int want, struct zfcp_adapter *adapter, &adapter->status)) return -EIO; - debug_event(adapter->erp_dbf, 4, &want, sizeof (int)); /* check whether we really need this */ switch (want) { case ZFCP_ERP_ACTION_REOPEN_UNIT: if (atomic_test_mask (ZFCP_STATUS_COMMON_ERP_INUSE, &unit->status)) { - debug_text_event(adapter->erp_dbf, 4, "u_actenq_drp"); - debug_event(adapter->erp_dbf, 4, &port->wwpn, - sizeof (wwn_t)); - debug_event(adapter->erp_dbf, 4, &unit->fcp_lun, - sizeof (fcp_lun_t)); goto out; } if (!atomic_test_mask @@ -3027,9 +2788,6 @@ static int zfcp_erp_action_enqueue(int want, struct zfcp_adapter *adapter, case ZFCP_ERP_ACTION_REOPEN_PORT: if (atomic_test_mask (ZFCP_STATUS_COMMON_ERP_INUSE, &port->status)) { - debug_text_event(adapter->erp_dbf, 4, "p_actenq_drp"); - debug_event(adapter->erp_dbf, 4, &port->wwpn, - sizeof (wwn_t)); goto out; } /* fall through !!! 
*/ @@ -3043,13 +2801,7 @@ static int zfcp_erp_action_enqueue(int want, struct zfcp_adapter *adapter, "0x%016Lx, action in use: %i)\n", want, port->wwpn, port->erp_action.action); - debug_text_event(adapter->erp_dbf, 4, - "pf_actenq_drp"); - } else - debug_text_event(adapter->erp_dbf, 4, - "pf_actenq_drpcp"); - debug_event(adapter->erp_dbf, 4, &port->wwpn, - sizeof (wwn_t)); + } goto out; } if (!atomic_test_mask @@ -3066,14 +2818,11 @@ static int zfcp_erp_action_enqueue(int want, struct zfcp_adapter *adapter, case ZFCP_ERP_ACTION_REOPEN_ADAPTER: if (atomic_test_mask (ZFCP_STATUS_COMMON_ERP_INUSE, &adapter->status)) { - debug_text_event(adapter->erp_dbf, 4, "a_actenq_drp"); goto out; } break; default: - debug_text_exception(adapter->erp_dbf, 1, "a_actenq_bug"); - debug_event(adapter->erp_dbf, 1, &want, sizeof (int)); ZFCP_LOG_NORMAL("bug: unknown erp action requested " "on adapter %s (action=%d)\n", zfcp_get_busid_by_adapter(adapter), want); @@ -3082,9 +2831,6 @@ static int zfcp_erp_action_enqueue(int want, struct zfcp_adapter *adapter, /* check whether we need something stronger first */ if (need) { - debug_text_event(adapter->erp_dbf, 4, "a_actenq_str"); - debug_event(adapter->erp_dbf, 4, &need, - sizeof (int)); ZFCP_LOG_DEBUG("stronger erp action %d needed before " "erp action %d on adapter %s\n", need, want, zfcp_get_busid_by_adapter(adapter)); @@ -3127,8 +2873,6 @@ static int zfcp_erp_action_enqueue(int want, struct zfcp_adapter *adapter, break; } - debug_text_event(adapter->erp_dbf, 4, "a_actenq"); - memset(erp_action, 0, sizeof (struct zfcp_erp_action)); erp_action->adapter = adapter; erp_action->port = port; @@ -3161,8 +2905,6 @@ zfcp_erp_action_dequeue(struct zfcp_erp_action *erp_action) erp_action->status &= ~ZFCP_STATUS_ERP_LOWMEM; } - debug_text_event(adapter->erp_dbf, 4, "a_actdeq"); - debug_event(adapter->erp_dbf, 4, &erp_action->action, sizeof (int)); list_del(&erp_action->list); zfcp_rec_dbf_event_action(144, erp_action); @@ -3270,7 +3012,6 @@ static void zfcp_erp_action_dismiss_adapter(struct zfcp_adapter *adapter) { struct zfcp_port *port; - debug_text_event(adapter->erp_dbf, 5, "a_actab"); if (atomic_test_mask(ZFCP_STATUS_COMMON_ERP_INUSE, &adapter->status)) zfcp_erp_action_dismiss(&adapter->erp_action); else @@ -3281,10 +3022,7 @@ static void zfcp_erp_action_dismiss_adapter(struct zfcp_adapter *adapter) static void zfcp_erp_action_dismiss_port(struct zfcp_port *port) { struct zfcp_unit *unit; - struct zfcp_adapter *adapter = port->adapter; - debug_text_event(adapter->erp_dbf, 5, "p_actab"); - debug_event(adapter->erp_dbf, 5, &port->wwpn, sizeof (wwn_t)); if (atomic_test_mask(ZFCP_STATUS_COMMON_ERP_INUSE, &port->status)) zfcp_erp_action_dismiss(&port->erp_action); else @@ -3294,41 +3032,26 @@ static void zfcp_erp_action_dismiss_port(struct zfcp_port *port) static void zfcp_erp_action_dismiss_unit(struct zfcp_unit *unit) { - struct zfcp_adapter *adapter = unit->port->adapter; - - debug_text_event(adapter->erp_dbf, 5, "u_actab"); - debug_event(adapter->erp_dbf, 5, &unit->fcp_lun, sizeof (fcp_lun_t)); if (atomic_test_mask(ZFCP_STATUS_COMMON_ERP_INUSE, &unit->status)) zfcp_erp_action_dismiss(&unit->erp_action); } static void zfcp_erp_action_to_running(struct zfcp_erp_action *erp_action) { - struct zfcp_adapter *adapter = erp_action->adapter; - - debug_text_event(adapter->erp_dbf, 6, "a_toru"); - debug_event(adapter->erp_dbf, 6, &erp_action->action, sizeof (int)); list_move(&erp_action->list, &erp_action->adapter->erp_running_head); zfcp_rec_dbf_event_action(145, erp_action); } 
static void zfcp_erp_action_to_ready(struct zfcp_erp_action *erp_action) { - struct zfcp_adapter *adapter = erp_action->adapter; - - debug_text_event(adapter->erp_dbf, 6, "a_tore"); - debug_event(adapter->erp_dbf, 6, &erp_action->action, sizeof (int)); list_move(&erp_action->list, &erp_action->adapter->erp_ready_head); zfcp_rec_dbf_event_action(146, erp_action); } void zfcp_erp_port_boxed(struct zfcp_port *port, u8 id, u64 ref) { - struct zfcp_adapter *adapter = port->adapter; unsigned long flags; - debug_text_event(adapter->erp_dbf, 3, "p_access_boxed"); - debug_event(adapter->erp_dbf, 3, &port->wwpn, sizeof(wwn_t)); read_lock_irqsave(&zfcp_data.config_lock, flags); zfcp_erp_modify_port_status(port, id, ref, ZFCP_STATUS_COMMON_ACCESS_BOXED, ZFCP_SET); @@ -3338,10 +3061,6 @@ void zfcp_erp_port_boxed(struct zfcp_port *port, u8 id, u64 ref) void zfcp_erp_unit_boxed(struct zfcp_unit *unit, u8 id, u64 ref) { - struct zfcp_adapter *adapter = unit->port->adapter; - - debug_text_event(adapter->erp_dbf, 3, "u_access_boxed"); - debug_event(adapter->erp_dbf, 3, &unit->fcp_lun, sizeof(fcp_lun_t)); zfcp_erp_modify_unit_status(unit, id, ref, ZFCP_STATUS_COMMON_ACCESS_BOXED, ZFCP_SET); zfcp_erp_unit_reopen(unit, ZFCP_STATUS_COMMON_ERP_FAILED, id, ref); @@ -3349,11 +3068,8 @@ void zfcp_erp_unit_boxed(struct zfcp_unit *unit, u8 id, u64 ref) void zfcp_erp_port_access_denied(struct zfcp_port *port, u8 id, u64 ref) { - struct zfcp_adapter *adapter = port->adapter; unsigned long flags; - debug_text_event(adapter->erp_dbf, 3, "p_access_denied"); - debug_event(adapter->erp_dbf, 3, &port->wwpn, sizeof(wwn_t)); read_lock_irqsave(&zfcp_data.config_lock, flags); zfcp_erp_modify_port_status(port, id, ref, ZFCP_STATUS_COMMON_ERP_FAILED | @@ -3363,10 +3079,6 @@ void zfcp_erp_port_access_denied(struct zfcp_port *port, u8 id, u64 ref) void zfcp_erp_unit_access_denied(struct zfcp_unit *unit, u8 id, u64 ref) { - struct zfcp_adapter *adapter = unit->port->adapter; - - debug_text_event(adapter->erp_dbf, 3, "u_access_denied"); - debug_event(adapter->erp_dbf, 3, &unit->fcp_lun, sizeof(fcp_lun_t)); zfcp_erp_modify_unit_status(unit, id, ref, ZFCP_STATUS_COMMON_ERP_FAILED | ZFCP_STATUS_COMMON_ACCESS_DENIED, ZFCP_SET); @@ -3381,9 +3093,6 @@ void zfcp_erp_adapter_access_changed(struct zfcp_adapter *adapter, u8 id, if (adapter->connection_features & FSF_FEATURE_NPIV_MODE) return; - debug_text_event(adapter->erp_dbf, 3, "a_access_recover"); - debug_event(adapter->erp_dbf, 3, zfcp_get_busid_by_adapter(adapter), 8); - read_lock_irqsave(&zfcp_data.config_lock, flags); if (adapter->nameserver_port) zfcp_erp_port_access_changed(adapter->nameserver_port, id, ref); @@ -3398,9 +3107,6 @@ void zfcp_erp_port_access_changed(struct zfcp_port *port, u8 id, u64 ref) struct zfcp_adapter *adapter = port->adapter; struct zfcp_unit *unit; - debug_text_event(adapter->erp_dbf, 3, "p_access_recover"); - debug_event(adapter->erp_dbf, 3, &port->wwpn, sizeof(wwn_t)); - if (!atomic_test_mask(ZFCP_STATUS_COMMON_ACCESS_DENIED, &port->status) && !atomic_test_mask(ZFCP_STATUS_COMMON_ACCESS_BOXED, @@ -3424,9 +3130,6 @@ void zfcp_erp_unit_access_changed(struct zfcp_unit *unit, u8 id, u64 ref) { struct zfcp_adapter *adapter = unit->port->adapter; - debug_text_event(adapter->erp_dbf, 3, "u_access_recover"); - debug_event(adapter->erp_dbf, 3, &unit->fcp_lun, sizeof(fcp_lun_t)); - if (!atomic_test_mask(ZFCP_STATUS_COMMON_ACCESS_DENIED, &unit->status) && !atomic_test_mask(ZFCP_STATUS_COMMON_ACCESS_BOXED, diff --git a/drivers/s390/scsi/zfcp_fsf.c 
b/drivers/s390/scsi/zfcp_fsf.c index ffdf99736a4c..b7aa9696ba60 100644 --- a/drivers/s390/scsi/zfcp_fsf.c +++ b/drivers/s390/scsi/zfcp_fsf.c @@ -799,19 +799,14 @@ zfcp_fsf_status_read_port_closed(struct zfcp_fsf_req *fsf_req) switch (status_buffer->status_subtype) { case FSF_STATUS_READ_SUB_CLOSE_PHYS_PORT: - debug_text_event(adapter->erp_dbf, 3, "unsol_pc_phys:"); zfcp_erp_port_reopen(port, 0, 101, (u64)fsf_req); break; case FSF_STATUS_READ_SUB_ERROR_PORT: - debug_text_event(adapter->erp_dbf, 1, "unsol_pc_err:"); zfcp_erp_port_shutdown(port, 0, 122, (u64)fsf_req); break; default: - debug_text_event(adapter->erp_dbf, 0, "unsol_unk_sub:"); - debug_exception(adapter->erp_dbf, 0, - &status_buffer->status_subtype, sizeof (u32)); ZFCP_LOG_NORMAL("bug: Undefined status subtype received " "for a reopen indication on port with " "d_id 0x%06x on the adapter %s. " @@ -1002,7 +997,6 @@ zfcp_fsf_status_read_handler(struct zfcp_fsf_req *fsf_req) break; case FSF_STATUS_READ_FEATURE_UPDATE_ALERT: - debug_text_event(adapter->erp_dbf, 2, "unsol_features:"); ZFCP_LOG_INFO("List of supported features on adapter %s has " "been changed from 0x%08X to 0x%08X\n", zfcp_get_busid_by_adapter(adapter), @@ -1151,8 +1145,6 @@ zfcp_fsf_abort_fcp_command_handler(struct zfcp_fsf_req *new_fsf_req) case FSF_PORT_HANDLE_NOT_VALID: if (fsf_stat_qual->word[0] != fsf_stat_qual->word[1]) { - debug_text_event(new_fsf_req->adapter->erp_dbf, 3, - "fsf_s_phand_nv0"); /* * In this case a command that was sent prior to a port * reopen was aborted (handles are different). This is @@ -1171,8 +1163,6 @@ zfcp_fsf_abort_fcp_command_handler(struct zfcp_fsf_req *new_fsf_req) fsf_status_qual, sizeof (union fsf_status_qual)); /* Let's hope this sorts out the mess */ - debug_text_event(new_fsf_req->adapter->erp_dbf, 1, - "fsf_s_phand_nv1"); zfcp_erp_adapter_reopen(unit->port->adapter, 0, 104, (u64)new_fsf_req); new_fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; @@ -1181,8 +1171,6 @@ zfcp_fsf_abort_fcp_command_handler(struct zfcp_fsf_req *new_fsf_req) case FSF_LUN_HANDLE_NOT_VALID: if (fsf_stat_qual->word[0] != fsf_stat_qual->word[1]) { - debug_text_event(new_fsf_req->adapter->erp_dbf, 3, - "fsf_s_lhand_nv0"); /* * In this case a command that was sent prior to a unit * reopen was aborted (handles are different). 
@@ -1204,8 +1192,6 @@ zfcp_fsf_abort_fcp_command_handler(struct zfcp_fsf_req *new_fsf_req) fsf_status_qual, sizeof (union fsf_status_qual)); /* Let's hope this sorts out the mess */ - debug_text_event(new_fsf_req->adapter->erp_dbf, 1, - "fsf_s_lhand_nv1"); zfcp_erp_port_reopen(unit->port, 0, 105, (u64)new_fsf_req); new_fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; @@ -1214,8 +1200,6 @@ zfcp_fsf_abort_fcp_command_handler(struct zfcp_fsf_req *new_fsf_req) case FSF_FCP_COMMAND_DOES_NOT_EXIST: retval = 0; - debug_text_event(new_fsf_req->adapter->erp_dbf, 3, - "fsf_s_no_exist"); new_fsf_req->status |= ZFCP_STATUS_FSFREQ_ABORTNOTNEEDED; break; @@ -1223,8 +1207,6 @@ zfcp_fsf_abort_fcp_command_handler(struct zfcp_fsf_req *new_fsf_req) ZFCP_LOG_INFO("Remote port 0x%016Lx on adapter %s needs to " "be reopened\n", unit->port->wwpn, zfcp_get_busid_by_unit(unit)); - debug_text_event(new_fsf_req->adapter->erp_dbf, 2, - "fsf_s_pboxed"); zfcp_erp_port_boxed(unit->port, 47, (u64)new_fsf_req); new_fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR | ZFCP_STATUS_FSFREQ_RETRY; @@ -1236,7 +1218,6 @@ zfcp_fsf_abort_fcp_command_handler(struct zfcp_fsf_req *new_fsf_req) "to be reopened\n", unit->fcp_lun, unit->port->wwpn, zfcp_get_busid_by_unit(unit)); - debug_text_event(new_fsf_req->adapter->erp_dbf, 1, "fsf_s_lboxed"); zfcp_erp_unit_boxed(unit, 48, (u64)new_fsf_req); new_fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR | ZFCP_STATUS_FSFREQ_RETRY; @@ -1245,26 +1226,17 @@ zfcp_fsf_abort_fcp_command_handler(struct zfcp_fsf_req *new_fsf_req) case FSF_ADAPTER_STATUS_AVAILABLE: switch (new_fsf_req->qtcb->header.fsf_status_qual.word[0]) { case FSF_SQ_INVOKE_LINK_TEST_PROCEDURE: - debug_text_event(new_fsf_req->adapter->erp_dbf, 1, - "fsf_sq_ltest"); zfcp_test_link(unit->port); new_fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; case FSF_SQ_ULP_DEPENDENT_ERP_REQUIRED: /* SCSI stack will escalate */ - debug_text_event(new_fsf_req->adapter->erp_dbf, 1, - "fsf_sq_ulp"); new_fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; default: ZFCP_LOG_NORMAL ("bug: Wrong status qualifier 0x%x arrived.\n", new_fsf_req->qtcb->header.fsf_status_qual.word[0]); - debug_text_event(new_fsf_req->adapter->erp_dbf, 0, - "fsf_sq_inval:"); - debug_exception(new_fsf_req->adapter->erp_dbf, 0, - &new_fsf_req->qtcb->header. 
- fsf_status_qual.word[0], sizeof (u32)); break; } break; @@ -1278,11 +1250,6 @@ zfcp_fsf_abort_fcp_command_handler(struct zfcp_fsf_req *new_fsf_req) ZFCP_LOG_NORMAL("bug: An unknown FSF Status was presented " "(debug info 0x%x)\n", new_fsf_req->qtcb->header.fsf_status); - debug_text_event(new_fsf_req->adapter->erp_dbf, 0, - "fsf_s_inval:"); - debug_exception(new_fsf_req->adapter->erp_dbf, 0, - &new_fsf_req->qtcb->header.fsf_status, - sizeof (u32)); break; } skip_fsfstatus: @@ -1485,7 +1452,6 @@ zfcp_fsf_send_ct_handler(struct zfcp_fsf_req *fsf_req) zfcp_get_busid_by_port(port), ZFCP_FC_SERVICE_CLASS_DEFAULT); /* stop operation for this adapter */ - debug_text_exception(adapter->erp_dbf, 0, "fsf_s_class_nsup"); zfcp_erp_adapter_shutdown(adapter, 0, 123, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -1494,13 +1460,11 @@ zfcp_fsf_send_ct_handler(struct zfcp_fsf_req *fsf_req) switch (header->fsf_status_qual.word[0]){ case FSF_SQ_INVOKE_LINK_TEST_PROCEDURE: /* reopening link to port */ - debug_text_event(adapter->erp_dbf, 1, "fsf_sq_ltest"); zfcp_test_link(port); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; case FSF_SQ_ULP_DEPENDENT_ERP_REQUIRED: /* ERP strategy will escalate */ - debug_text_event(adapter->erp_dbf, 1, "fsf_sq_ulp"); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; default: @@ -1528,7 +1492,6 @@ zfcp_fsf_send_ct_handler(struct zfcp_fsf_req *fsf_req) break; } } - debug_text_event(adapter->erp_dbf, 1, "fsf_s_access"); zfcp_erp_port_access_denied(port, 55, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -1541,7 +1504,6 @@ zfcp_fsf_send_ct_handler(struct zfcp_fsf_req *fsf_req) ZFCP_HEX_DUMP(ZFCP_LOG_LEVEL_INFO, (char *) &header->fsf_status_qual, sizeof (union fsf_status_qual)); - debug_text_event(adapter->erp_dbf, 1, "fsf_s_gcom_rej"); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -1554,7 +1516,6 @@ zfcp_fsf_send_ct_handler(struct zfcp_fsf_req *fsf_req) ZFCP_HEX_DUMP(ZFCP_LOG_LEVEL_INFO, (char *) &header->fsf_status_qual, sizeof (union fsf_status_qual)); - debug_text_event(adapter->erp_dbf, 1, "fsf_s_phandle_nv"); zfcp_erp_adapter_reopen(adapter, 0, 106, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -1563,7 +1524,6 @@ zfcp_fsf_send_ct_handler(struct zfcp_fsf_req *fsf_req) ZFCP_LOG_INFO("port needs to be reopened " "(adapter %s, port d_id=0x%06x)\n", zfcp_get_busid_by_port(port), port->d_id); - debug_text_event(adapter->erp_dbf, 2, "fsf_s_pboxed"); zfcp_erp_port_boxed(port, 49, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR | ZFCP_STATUS_FSFREQ_RETRY; @@ -1603,9 +1563,6 @@ zfcp_fsf_send_ct_handler(struct zfcp_fsf_req *fsf_req) default: ZFCP_LOG_NORMAL("bug: An unknown FSF Status was presented " "(debug info 0x%x)\n", header->fsf_status); - debug_text_event(adapter->erp_dbf, 0, "fsf_sq_inval:"); - debug_exception(adapter->erp_dbf, 0, - &header->fsf_status_qual.word[0], sizeof (u32)); break; } @@ -1789,7 +1746,6 @@ static int zfcp_fsf_send_els_handler(struct zfcp_fsf_req *fsf_req) zfcp_get_busid_by_adapter(adapter), ZFCP_FC_SERVICE_CLASS_DEFAULT); /* stop operation for this adapter */ - debug_text_exception(adapter->erp_dbf, 0, "fsf_s_class_nsup"); zfcp_erp_adapter_shutdown(adapter, 0, 124, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -1797,13 +1753,11 @@ static int zfcp_fsf_send_els_handler(struct zfcp_fsf_req *fsf_req) case FSF_ADAPTER_STATUS_AVAILABLE: switch (header->fsf_status_qual.word[0]){ case FSF_SQ_INVOKE_LINK_TEST_PROCEDURE: - 
debug_text_event(adapter->erp_dbf, 1, "fsf_sq_ltest"); if (port && (send_els->ls_code != ZFCP_LS_ADISC)) zfcp_test_link(port); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; case FSF_SQ_ULP_DEPENDENT_ERP_REQUIRED: - debug_text_event(adapter->erp_dbf, 1, "fsf_sq_ulp"); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; retval = zfcp_handle_els_rjt(header->fsf_status_qual.word[1], @@ -1811,7 +1765,6 @@ static int zfcp_fsf_send_els_handler(struct zfcp_fsf_req *fsf_req) &header->fsf_status_qual.word[2]); break; case FSF_SQ_RETRY_IF_POSSIBLE: - debug_text_event(adapter->erp_dbf, 1, "fsf_sq_retry"); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; default: @@ -1888,7 +1841,6 @@ static int zfcp_fsf_send_els_handler(struct zfcp_fsf_req *fsf_req) break; } } - debug_text_event(adapter->erp_dbf, 1, "fsf_s_access"); if (port != NULL) zfcp_erp_port_access_denied(port, 56, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; @@ -1900,9 +1852,6 @@ static int zfcp_fsf_send_els_handler(struct zfcp_fsf_req *fsf_req) "(adapter: %s, fsf_status=0x%08x)\n", zfcp_get_busid_by_adapter(adapter), header->fsf_status); - debug_text_event(adapter->erp_dbf, 0, "fsf_sq_inval"); - debug_exception(adapter->erp_dbf, 0, - &header->fsf_status_qual.word[0], sizeof(u32)); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; } @@ -2111,7 +2060,6 @@ zfcp_fsf_exchange_config_evaluate(struct zfcp_fsf_req *fsf_req, int xchg_ok) "versions in comparison to this device " "driver (try updated device driver)\n", zfcp_get_busid_by_adapter(adapter)); - debug_text_event(adapter->erp_dbf, 0, "low_qtcb_ver"); zfcp_erp_adapter_shutdown(adapter, 0, 125, (u64)fsf_req); return -EIO; } @@ -2121,7 +2069,6 @@ zfcp_fsf_exchange_config_evaluate(struct zfcp_fsf_req *fsf_req, int xchg_ok) "versions than this device driver uses" "(consider a microcode upgrade)\n", zfcp_get_busid_by_adapter(adapter)); - debug_text_event(adapter->erp_dbf, 0, "high_qtcb_ver"); zfcp_erp_adapter_shutdown(adapter, 0, 126, (u64)fsf_req); return -EIO; } @@ -2162,16 +2109,12 @@ zfcp_fsf_exchange_config_data_handler(struct zfcp_fsf_req *fsf_req) adapter->peer_wwnn, adapter->peer_wwpn, adapter->peer_d_id); - debug_text_event(fsf_req->adapter->erp_dbf, 0, - "top-p-to-p"); break; case FC_PORTTYPE_NLPORT: ZFCP_LOG_NORMAL("error: Arbitrated loop fibrechannel " "topology detected at adapter %s " "unsupported, shutting down adapter\n", zfcp_get_busid_by_adapter(adapter)); - debug_text_event(fsf_req->adapter->erp_dbf, 0, - "top-al"); zfcp_erp_adapter_shutdown(adapter, 0, 127, (u64)fsf_req); return -EIO; case FC_PORTTYPE_NPORT: @@ -2187,8 +2130,6 @@ zfcp_fsf_exchange_config_data_handler(struct zfcp_fsf_req *fsf_req) "of a type known to the zfcp " "driver, shutting down adapter\n", zfcp_get_busid_by_adapter(adapter)); - debug_text_exception(fsf_req->adapter->erp_dbf, 0, - "unknown-topo"); zfcp_erp_adapter_shutdown(adapter, 0, 128, (u64)fsf_req); return -EIO; } @@ -2201,10 +2142,6 @@ zfcp_fsf_exchange_config_data_handler(struct zfcp_fsf_req *fsf_req) bottom->max_qtcb_size, zfcp_get_busid_by_adapter(adapter), sizeof(struct fsf_qtcb)); - debug_text_event(fsf_req->adapter->erp_dbf, 0, - "qtcb-size"); - debug_event(fsf_req->adapter->erp_dbf, 0, - &bottom->max_qtcb_size, sizeof (u32)); zfcp_erp_adapter_shutdown(adapter, 0, 129, (u64)fsf_req); return -EIO; } @@ -2212,8 +2149,6 @@ zfcp_fsf_exchange_config_data_handler(struct zfcp_fsf_req *fsf_req) &adapter->status); break; case FSF_EXCHANGE_CONFIG_DATA_INCOMPLETE: - debug_text_event(adapter->erp_dbf, 0, "xchg-inco"); - if 
(zfcp_fsf_exchange_config_evaluate(fsf_req, 0)) return -EIO; @@ -2224,9 +2159,6 @@ zfcp_fsf_exchange_config_data_handler(struct zfcp_fsf_req *fsf_req) &qtcb->header.fsf_status_qual.link_down_info); break; default: - debug_text_event(fsf_req->adapter->erp_dbf, 0, "fsf-stat-ng"); - debug_event(fsf_req->adapter->erp_dbf, 0, - &fsf_req->qtcb->header.fsf_status, sizeof(u32)); zfcp_erp_adapter_shutdown(adapter, 0, 130, (u64)fsf_req); return -EIO; } @@ -2406,10 +2338,6 @@ zfcp_fsf_exchange_port_data_handler(struct zfcp_fsf_req *fsf_req) zfcp_fsf_link_down_info_eval(fsf_req, 43, &qtcb->header.fsf_status_qual.link_down_info); break; - default: - debug_text_event(adapter->erp_dbf, 0, "xchg-port-ng"); - debug_event(adapter->erp_dbf, 0, - &fsf_req->qtcb->header.fsf_status, sizeof(u32)); } } @@ -2507,8 +2435,6 @@ zfcp_fsf_open_port_handler(struct zfcp_fsf_req *fsf_req) ZFCP_LOG_NORMAL("bug: remote port 0x%016Lx on adapter %s " "is already open.\n", port->wwpn, zfcp_get_busid_by_port(port)); - debug_text_exception(fsf_req->adapter->erp_dbf, 0, - "fsf_s_popen"); /* * This is a bug, however operation should continue normally * if it is simply ignored @@ -2532,7 +2458,6 @@ zfcp_fsf_open_port_handler(struct zfcp_fsf_req *fsf_req) break; } } - debug_text_event(fsf_req->adapter->erp_dbf, 1, "fsf_s_access"); zfcp_erp_port_access_denied(port, 57, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -2542,8 +2467,6 @@ zfcp_fsf_open_port_handler(struct zfcp_fsf_req *fsf_req) "The remote port 0x%016Lx on adapter %s " "could not be opened. Disabling it.\n", port->wwpn, zfcp_get_busid_by_port(port)); - debug_text_event(fsf_req->adapter->erp_dbf, 1, - "fsf_s_max_ports"); zfcp_erp_port_failed(port, 31, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -2551,15 +2474,11 @@ zfcp_fsf_open_port_handler(struct zfcp_fsf_req *fsf_req) case FSF_ADAPTER_STATUS_AVAILABLE: switch (header->fsf_status_qual.word[0]) { case FSF_SQ_INVOKE_LINK_TEST_PROCEDURE: - debug_text_event(fsf_req->adapter->erp_dbf, 1, - "fsf_sq_ltest"); /* ERP strategy will escalate */ fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; case FSF_SQ_ULP_DEPENDENT_ERP_REQUIRED: /* ERP strategy will escalate */ - debug_text_event(fsf_req->adapter->erp_dbf, 1, - "fsf_sq_ulp"); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; case FSF_SQ_NO_RETRY_POSSIBLE: @@ -2568,8 +2487,6 @@ zfcp_fsf_open_port_handler(struct zfcp_fsf_req *fsf_req) "Disabling it.\n", port->wwpn, zfcp_get_busid_by_port(port)); - debug_text_exception(fsf_req->adapter->erp_dbf, 0, - "fsf_sq_no_retry"); zfcp_erp_port_failed(port, 32, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -2577,12 +2494,6 @@ zfcp_fsf_open_port_handler(struct zfcp_fsf_req *fsf_req) ZFCP_LOG_NORMAL ("bug: Wrong status qualifier 0x%x arrived.\n", header->fsf_status_qual.word[0]); - debug_text_event(fsf_req->adapter->erp_dbf, 0, - "fsf_sq_inval:"); - debug_exception( - fsf_req->adapter->erp_dbf, 0, - &header->fsf_status_qual.word[0], - sizeof (u32)); break; } break; @@ -2625,17 +2536,12 @@ zfcp_fsf_open_port_handler(struct zfcp_fsf_req *fsf_req) "warning: insufficient length of " "PLOGI payload (%i)\n", fsf_req->qtcb->bottom.support.els1_length); - debug_text_event(fsf_req->adapter->erp_dbf, 0, - "fsf_s_short_plogi:"); /* skip sanity check and assume wwpn is ok */ } else { if (plogi->serv_param.wwpn != port->wwpn) { ZFCP_LOG_INFO("warning: d_id of port " "0x%016Lx changed during " "open\n", port->wwpn); - debug_text_event( - fsf_req->adapter->erp_dbf, 0, - 
"fsf_s_did_change:"); atomic_clear_mask( ZFCP_STATUS_PORT_DID_DID, &port->status); @@ -2660,9 +2566,6 @@ zfcp_fsf_open_port_handler(struct zfcp_fsf_req *fsf_req) ZFCP_LOG_NORMAL("bug: An unknown FSF Status was presented " "(debug info 0x%x)\n", header->fsf_status); - debug_text_event(fsf_req->adapter->erp_dbf, 0, "fsf_s_inval:"); - debug_exception(fsf_req->adapter->erp_dbf, 0, - &header->fsf_status, sizeof (u32)); break; } @@ -2766,8 +2669,6 @@ zfcp_fsf_close_port_handler(struct zfcp_fsf_req *fsf_req) ZFCP_HEX_DUMP(ZFCP_LOG_LEVEL_DEBUG, (char *) &fsf_req->qtcb->header.fsf_status_qual, sizeof (union fsf_status_qual)); - debug_text_event(fsf_req->adapter->erp_dbf, 1, - "fsf_s_phand_nv"); zfcp_erp_adapter_reopen(port->adapter, 0, 107, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -2793,10 +2694,6 @@ zfcp_fsf_close_port_handler(struct zfcp_fsf_req *fsf_req) ZFCP_LOG_NORMAL("bug: An unknown FSF Status was presented " "(debug info 0x%x)\n", fsf_req->qtcb->header.fsf_status); - debug_text_event(fsf_req->adapter->erp_dbf, 0, "fsf_s_inval:"); - debug_exception(fsf_req->adapter->erp_dbf, 0, - &fsf_req->qtcb->header.fsf_status, - sizeof (u32)); break; } @@ -2909,8 +2806,6 @@ zfcp_fsf_close_physical_port_handler(struct zfcp_fsf_req *fsf_req) ZFCP_HEX_DUMP(ZFCP_LOG_LEVEL_DEBUG, (char *) &header->fsf_status_qual, sizeof (union fsf_status_qual)); - debug_text_event(fsf_req->adapter->erp_dbf, 1, - "fsf_s_phand_nv"); zfcp_erp_adapter_reopen(port->adapter, 0, 108, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -2932,7 +2827,6 @@ zfcp_fsf_close_physical_port_handler(struct zfcp_fsf_req *fsf_req) break; } } - debug_text_event(fsf_req->adapter->erp_dbf, 1, "fsf_s_access"); zfcp_erp_port_access_denied(port, 58, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -2943,7 +2837,6 @@ zfcp_fsf_close_physical_port_handler(struct zfcp_fsf_req *fsf_req) "to close it physically.\n", port->wwpn, zfcp_get_busid_by_port(port)); - debug_text_event(fsf_req->adapter->erp_dbf, 1, "fsf_s_pboxed"); zfcp_erp_port_boxed(port, 50, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR | ZFCP_STATUS_FSFREQ_RETRY; @@ -2959,26 +2852,17 @@ zfcp_fsf_close_physical_port_handler(struct zfcp_fsf_req *fsf_req) case FSF_ADAPTER_STATUS_AVAILABLE: switch (header->fsf_status_qual.word[0]) { case FSF_SQ_INVOKE_LINK_TEST_PROCEDURE: - debug_text_event(fsf_req->adapter->erp_dbf, 1, - "fsf_sq_ltest"); /* This will now be escalated by ERP */ fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; case FSF_SQ_ULP_DEPENDENT_ERP_REQUIRED: /* ERP strategy will escalate */ - debug_text_event(fsf_req->adapter->erp_dbf, 1, - "fsf_sq_ulp"); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; default: ZFCP_LOG_NORMAL ("bug: Wrong status qualifier 0x%x arrived.\n", header->fsf_status_qual.word[0]); - debug_text_event(fsf_req->adapter->erp_dbf, 0, - "fsf_sq_inval:"); - debug_exception( - fsf_req->adapter->erp_dbf, 0, - &header->fsf_status_qual.word[0], sizeof (u32)); break; } break; @@ -3001,9 +2885,6 @@ zfcp_fsf_close_physical_port_handler(struct zfcp_fsf_req *fsf_req) ZFCP_LOG_NORMAL("bug: An unknown FSF Status was presented " "(debug info 0x%x)\n", header->fsf_status); - debug_text_event(fsf_req->adapter->erp_dbf, 0, "fsf_s_inval:"); - debug_exception(fsf_req->adapter->erp_dbf, 0, - &header->fsf_status, sizeof (u32)); break; } @@ -3135,7 +3016,6 @@ zfcp_fsf_open_unit_handler(struct zfcp_fsf_req *fsf_req) ZFCP_HEX_DUMP(ZFCP_LOG_LEVEL_DEBUG, (char *) &header->fsf_status_qual, sizeof (union 
fsf_status_qual)); - debug_text_event(adapter->erp_dbf, 1, "fsf_s_ph_nv"); zfcp_erp_adapter_reopen(unit->port->adapter, 0, 109, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; @@ -3146,8 +3026,6 @@ zfcp_fsf_open_unit_handler(struct zfcp_fsf_req *fsf_req) "remote port 0x%016Lx on adapter %s twice.\n", unit->fcp_lun, unit->port->wwpn, zfcp_get_busid_by_unit(unit)); - debug_text_exception(adapter->erp_dbf, 0, - "fsf_s_uopen"); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -3169,7 +3047,6 @@ zfcp_fsf_open_unit_handler(struct zfcp_fsf_req *fsf_req) break; } } - debug_text_event(adapter->erp_dbf, 1, "fsf_s_access"); zfcp_erp_unit_access_denied(unit, 59, (u64)fsf_req); atomic_clear_mask(ZFCP_STATUS_UNIT_SHARED, &unit->status); atomic_clear_mask(ZFCP_STATUS_UNIT_READONLY, &unit->status); @@ -3180,7 +3057,6 @@ zfcp_fsf_open_unit_handler(struct zfcp_fsf_req *fsf_req) ZFCP_LOG_DEBUG("The remote port 0x%016Lx on adapter %s " "needs to be reopened\n", unit->port->wwpn, zfcp_get_busid_by_unit(unit)); - debug_text_event(adapter->erp_dbf, 2, "fsf_s_pboxed"); zfcp_erp_port_boxed(unit->port, 51, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR | ZFCP_STATUS_FSFREQ_RETRY; @@ -3221,8 +3097,6 @@ zfcp_fsf_open_unit_handler(struct zfcp_fsf_req *fsf_req) ZFCP_HEX_DUMP(ZFCP_LOG_LEVEL_DEBUG, (char *) &header->fsf_status_qual, sizeof (union fsf_status_qual)); - debug_text_event(adapter->erp_dbf, 2, - "fsf_s_l_sh_vio"); zfcp_erp_unit_access_denied(unit, 60, (u64)fsf_req); atomic_clear_mask(ZFCP_STATUS_UNIT_SHARED, &unit->status); atomic_clear_mask(ZFCP_STATUS_UNIT_READONLY, &unit->status); @@ -3237,8 +3111,6 @@ zfcp_fsf_open_unit_handler(struct zfcp_fsf_req *fsf_req) unit->fcp_lun, unit->port->wwpn, zfcp_get_busid_by_unit(unit)); - debug_text_event(adapter->erp_dbf, 1, - "fsf_s_max_units"); zfcp_erp_unit_failed(unit, 34, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -3247,26 +3119,17 @@ zfcp_fsf_open_unit_handler(struct zfcp_fsf_req *fsf_req) switch (header->fsf_status_qual.word[0]) { case FSF_SQ_INVOKE_LINK_TEST_PROCEDURE: /* Re-establish link to port */ - debug_text_event(adapter->erp_dbf, 1, - "fsf_sq_ltest"); zfcp_test_link(unit->port); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; case FSF_SQ_ULP_DEPENDENT_ERP_REQUIRED: /* ERP strategy will escalate */ - debug_text_event(adapter->erp_dbf, 1, - "fsf_sq_ulp"); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; default: ZFCP_LOG_NORMAL ("bug: Wrong status qualifier 0x%x arrived.\n", header->fsf_status_qual.word[0]); - debug_text_event(adapter->erp_dbf, 0, - "fsf_sq_inval:"); - debug_exception(adapter->erp_dbf, 0, - &header->fsf_status_qual.word[0], - sizeof (u32)); } break; @@ -3339,9 +3202,6 @@ zfcp_fsf_open_unit_handler(struct zfcp_fsf_req *fsf_req) ZFCP_LOG_NORMAL("bug: An unknown FSF Status was presented " "(debug info 0x%x)\n", header->fsf_status); - debug_text_event(adapter->erp_dbf, 0, "fsf_s_inval:"); - debug_exception(adapter->erp_dbf, 0, - &header->fsf_status, sizeof (u32)); break; } @@ -3454,8 +3314,6 @@ zfcp_fsf_close_unit_handler(struct zfcp_fsf_req *fsf_req) ZFCP_HEX_DUMP(ZFCP_LOG_LEVEL_DEBUG, (char *) &fsf_req->qtcb->header.fsf_status_qual, sizeof (union fsf_status_qual)); - debug_text_event(fsf_req->adapter->erp_dbf, 1, - "fsf_s_phand_nv"); zfcp_erp_adapter_reopen(unit->port->adapter, 0, 110, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; @@ -3473,8 +3331,6 @@ zfcp_fsf_close_unit_handler(struct zfcp_fsf_req *fsf_req) ZFCP_HEX_DUMP(ZFCP_LOG_LEVEL_DEBUG, (char *) 
&fsf_req->qtcb->header.fsf_status_qual, sizeof (union fsf_status_qual)); - debug_text_event(fsf_req->adapter->erp_dbf, 1, - "fsf_s_lhand_nv"); zfcp_erp_port_reopen(unit->port, 0, 111, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -3484,7 +3340,6 @@ zfcp_fsf_close_unit_handler(struct zfcp_fsf_req *fsf_req) "needs to be reopened\n", unit->port->wwpn, zfcp_get_busid_by_unit(unit)); - debug_text_event(fsf_req->adapter->erp_dbf, 2, "fsf_s_pboxed"); zfcp_erp_port_boxed(unit->port, 52, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR | ZFCP_STATUS_FSFREQ_RETRY; @@ -3494,27 +3349,17 @@ zfcp_fsf_close_unit_handler(struct zfcp_fsf_req *fsf_req) switch (fsf_req->qtcb->header.fsf_status_qual.word[0]) { case FSF_SQ_INVOKE_LINK_TEST_PROCEDURE: /* re-establish link to port */ - debug_text_event(fsf_req->adapter->erp_dbf, 1, - "fsf_sq_ltest"); zfcp_test_link(unit->port); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; case FSF_SQ_ULP_DEPENDENT_ERP_REQUIRED: /* ERP strategy will escalate */ - debug_text_event(fsf_req->adapter->erp_dbf, 1, - "fsf_sq_ulp"); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; default: ZFCP_LOG_NORMAL ("bug: Wrong status qualifier 0x%x arrived.\n", fsf_req->qtcb->header.fsf_status_qual.word[0]); - debug_text_event(fsf_req->adapter->erp_dbf, 0, - "fsf_sq_inval:"); - debug_exception( - fsf_req->adapter->erp_dbf, 0, - &fsf_req->qtcb->header.fsf_status_qual.word[0], - sizeof (u32)); break; } break; @@ -3535,10 +3380,6 @@ zfcp_fsf_close_unit_handler(struct zfcp_fsf_req *fsf_req) ZFCP_LOG_NORMAL("bug: An unknown FSF Status was presented " "(debug info 0x%x)\n", fsf_req->qtcb->header.fsf_status); - debug_text_event(fsf_req->adapter->erp_dbf, 0, "fsf_s_inval:"); - debug_exception(fsf_req->adapter->erp_dbf, 0, - &fsf_req->qtcb->header.fsf_status, - sizeof (u32)); break; } @@ -3851,8 +3692,6 @@ zfcp_fsf_send_fcp_command_handler(struct zfcp_fsf_req *fsf_req) ZFCP_HEX_DUMP(ZFCP_LOG_LEVEL_DEBUG, (char *) &header->fsf_status_qual, sizeof (union fsf_status_qual)); - debug_text_event(fsf_req->adapter->erp_dbf, 1, - "fsf_s_phand_nv"); zfcp_erp_adapter_reopen(unit->port->adapter, 0, 112, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; @@ -3870,8 +3709,6 @@ zfcp_fsf_send_fcp_command_handler(struct zfcp_fsf_req *fsf_req) ZFCP_HEX_DUMP(ZFCP_LOG_LEVEL_NORMAL, (char *) &header->fsf_status_qual, sizeof (union fsf_status_qual)); - debug_text_event(fsf_req->adapter->erp_dbf, 1, - "fsf_s_uhand_nv"); zfcp_erp_port_reopen(unit->port, 0, 113, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -3888,8 +3725,6 @@ zfcp_fsf_send_fcp_command_handler(struct zfcp_fsf_req *fsf_req) ZFCP_HEX_DUMP(ZFCP_LOG_LEVEL_NORMAL, (char *) &header->fsf_status_qual, sizeof (union fsf_status_qual)); - debug_text_event(fsf_req->adapter->erp_dbf, 1, - "fsf_s_hand_mis"); zfcp_erp_adapter_reopen(unit->port->adapter, 0, 114, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; @@ -3901,8 +3736,6 @@ zfcp_fsf_send_fcp_command_handler(struct zfcp_fsf_req *fsf_req) zfcp_get_busid_by_unit(unit), ZFCP_FC_SERVICE_CLASS_DEFAULT); /* stop operation for this adapter */ - debug_text_exception(fsf_req->adapter->erp_dbf, 0, - "fsf_s_class_nsup"); zfcp_erp_adapter_shutdown(unit->port->adapter, 0, 132, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; @@ -3920,8 +3753,6 @@ zfcp_fsf_send_fcp_command_handler(struct zfcp_fsf_req *fsf_req) ZFCP_HEX_DUMP(ZFCP_LOG_LEVEL_DEBUG, (char *) &header->fsf_status_qual, sizeof (union fsf_status_qual)); - 
debug_text_event(fsf_req->adapter->erp_dbf, 1, - "fsf_s_fcp_lun_nv"); zfcp_erp_port_reopen(unit->port, 0, 115, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -3944,7 +3775,6 @@ zfcp_fsf_send_fcp_command_handler(struct zfcp_fsf_req *fsf_req) break; } } - debug_text_event(fsf_req->adapter->erp_dbf, 1, "fsf_s_access"); zfcp_erp_unit_access_denied(unit, 61, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -3958,8 +3788,6 @@ zfcp_fsf_send_fcp_command_handler(struct zfcp_fsf_req *fsf_req) zfcp_get_busid_by_unit(unit), fsf_req->qtcb->bottom.io.data_direction); /* stop operation for this adapter */ - debug_text_event(fsf_req->adapter->erp_dbf, 0, - "fsf_s_dir_ind_nv"); zfcp_erp_adapter_shutdown(unit->port->adapter, 0, 133, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; @@ -3974,8 +3802,6 @@ zfcp_fsf_send_fcp_command_handler(struct zfcp_fsf_req *fsf_req) zfcp_get_busid_by_unit(unit), fsf_req->qtcb->bottom.io.fcp_cmnd_length); /* stop operation for this adapter */ - debug_text_event(fsf_req->adapter->erp_dbf, 0, - "fsf_s_cmd_len_nv"); zfcp_erp_adapter_shutdown(unit->port->adapter, 0, 134, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; @@ -3985,7 +3811,6 @@ zfcp_fsf_send_fcp_command_handler(struct zfcp_fsf_req *fsf_req) ZFCP_LOG_DEBUG("The remote port 0x%016Lx on adapter %s " "needs to be reopened\n", unit->port->wwpn, zfcp_get_busid_by_unit(unit)); - debug_text_event(fsf_req->adapter->erp_dbf, 2, "fsf_s_pboxed"); zfcp_erp_port_boxed(unit->port, 53, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR | ZFCP_STATUS_FSFREQ_RETRY; @@ -3996,7 +3821,6 @@ zfcp_fsf_send_fcp_command_handler(struct zfcp_fsf_req *fsf_req) "wwpn=0x%016Lx, fcp_lun=0x%016Lx)\n", zfcp_get_busid_by_unit(unit), unit->port->wwpn, unit->fcp_lun); - debug_text_event(fsf_req->adapter->erp_dbf, 1, "fsf_s_lboxed"); zfcp_erp_unit_boxed(unit, 54, (u64)fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR | ZFCP_STATUS_FSFREQ_RETRY; @@ -4006,25 +3830,16 @@ zfcp_fsf_send_fcp_command_handler(struct zfcp_fsf_req *fsf_req) switch (header->fsf_status_qual.word[0]) { case FSF_SQ_INVOKE_LINK_TEST_PROCEDURE: /* re-establish link to port */ - debug_text_event(fsf_req->adapter->erp_dbf, 1, - "fsf_sq_ltest"); zfcp_test_link(unit->port); break; case FSF_SQ_ULP_DEPENDENT_ERP_REQUIRED: /* FIXME(hw) need proper specs for proper action */ /* let scsi stack deal with retries and escalation */ - debug_text_event(fsf_req->adapter->erp_dbf, 1, - "fsf_sq_ulp"); break; default: ZFCP_LOG_NORMAL ("Unknown status qualifier 0x%x arrived.\n", header->fsf_status_qual.word[0]); - debug_text_event(fsf_req->adapter->erp_dbf, 0, - "fsf_sq_inval:"); - debug_exception(fsf_req->adapter->erp_dbf, 0, - &header->fsf_status_qual.word[0], - sizeof(u32)); break; } fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; @@ -4035,12 +3850,6 @@ zfcp_fsf_send_fcp_command_handler(struct zfcp_fsf_req *fsf_req) case FSF_FCP_RSP_AVAILABLE: break; - - default: - debug_text_event(fsf_req->adapter->erp_dbf, 0, "fsf_s_inval:"); - debug_exception(fsf_req->adapter->erp_dbf, 0, - &header->fsf_status, sizeof(u32)); - break; } skip_fsfstatus: @@ -4620,9 +4429,6 @@ zfcp_fsf_control_file_handler(struct zfcp_fsf_req *fsf_req) "was presented on the adapter %s\n", header->fsf_status, zfcp_get_busid_by_adapter(adapter)); - debug_text_event(fsf_req->adapter->erp_dbf, 0, "fsf_sq_inval"); - debug_exception(fsf_req->adapter->erp_dbf, 0, - &header->fsf_status_qual.word[0], sizeof(u32)); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; retval = 
-EINVAL; break; @@ -4812,7 +4618,6 @@ static int zfcp_fsf_req_send(struct zfcp_fsf_req *fsf_req) volatile struct qdio_buffer_element *sbale; int inc_seq_no; int new_distance_from_int; - u64 dbg_tmp[2]; int retval = 0; adapter = fsf_req->adapter; @@ -4862,10 +4667,6 @@ static int zfcp_fsf_req_send(struct zfcp_fsf_req *fsf_req) QDIO_FLAG_SYNC_OUTPUT, 0, fsf_req->sbal_first, fsf_req->sbal_number, NULL); - dbg_tmp[0] = (unsigned long) sbale[0].addr; - dbg_tmp[1] = (u64) retval; - debug_event(adapter->erp_dbf, 4, (void *) dbg_tmp, 16); - if (unlikely(retval)) { /* Queues are down..... */ retval = -EIO; diff --git a/drivers/s390/scsi/zfcp_qdio.c b/drivers/s390/scsi/zfcp_qdio.c index d742d0a9df77..5d60a4116aff 100644 --- a/drivers/s390/scsi/zfcp_qdio.c +++ b/drivers/s390/scsi/zfcp_qdio.c @@ -239,8 +239,6 @@ static void zfcp_qdio_reqid_check(struct zfcp_adapter *adapter, struct zfcp_fsf_req *fsf_req; unsigned long flags; - debug_long_event(adapter->erp_dbf, 4, req_id); - spin_lock_irqsave(&adapter->req_list_lock, flags); fsf_req = zfcp_reqlist_find(adapter, req_id); -- cgit v1.2.3-59-g8ed1b From 8fc5af168753239d7bf77ccca831196bcdffbfbe Mon Sep 17 00:00:00 2001 From: Martin Peschke Date: Mon, 31 Mar 2008 11:15:23 +0200 Subject: [SCSI] zfcp: simplify zfcp_dbf_timestamp() Change zfcp_dbf_timestamp() so that it just calculates timespec from timestamp. First step to be able to rip this code out of zfcp. Besides, this change makes it easier to rip out old-style debug view functions. Signed-off-by: Martin Peschke Signed-off-by: Christof Schmitt Signed-off-by: James Bottomley --- drivers/s390/scsi/zfcp_dbf.c | 30 ++++++++++++++++-------------- 1 file changed, 16 insertions(+), 14 deletions(-) (limited to 'drivers/s390') diff --git a/drivers/s390/scsi/zfcp_dbf.c b/drivers/s390/scsi/zfcp_dbf.c index aecdc7f2dbc6..edd93533db40 100644 --- a/drivers/s390/scsi/zfcp_dbf.c +++ b/drivers/s390/scsi/zfcp_dbf.c @@ -49,23 +49,17 @@ static void zfcp_dbf_hexdump(debug_info_t *dbf, void *to, int to_len, } } -static int -zfcp_dbf_stck(char *out_buf, const char *label, unsigned long long stck) +/* FIXME: this duplicate this code in s390 debug feature */ +static void zfcp_dbf_timestamp(unsigned long long stck, struct timespec *time) { unsigned long long sec; - struct timespec dbftime; - int len = 0; stck -= 0x8126d60e46000000LL - (0x3c26700LL * 1000000 * 4096); sec = stck >> 12; do_div(sec, 1000000); - dbftime.tv_sec = sec; + time->tv_sec = sec; stck -= (sec * 1000000) << 12; - dbftime.tv_nsec = ((stck * 1000) >> 12); - len += sprintf(out_buf + len, "%-24s%011lu:%06lu\n", - label, dbftime.tv_sec, dbftime.tv_nsec); - - return len; + time->tv_nsec = ((stck * 1000) >> 12); } static int zfcp_dbf_tag(char *out_buf, const char *label, const char *tag) @@ -146,10 +140,12 @@ zfcp_dbf_view_header(debug_info_t * id, struct debug_view *view, int area, { struct zfcp_dbf_dump *dump = (struct zfcp_dbf_dump *)DEBUG_DATA(entry); int len = 0; + struct timespec t; if (strncmp(dump->tag, "dump", ZFCP_DBF_TAG_SIZE) != 0) { - len += zfcp_dbf_stck(out_buf + len, "timestamp", - entry->id.stck); + zfcp_dbf_timestamp(entry->id.stck, &t); + len += zfcp_dbf_view(out_buf + len, "timestamp", "%011lu:%06lu", + t.tv_sec, t.tv_nsec); len += zfcp_dbf_view(out_buf + len, "cpu", "%02i", entry->id.fields.cpuid); } else { @@ -363,6 +359,7 @@ zfcp_hba_dbf_view_response(char *out_buf, struct zfcp_hba_dbf_record_response *rec) { int len = 0; + struct timespec t; len += zfcp_dbf_view(out_buf + len, "fsf_command", "0x%08x", rec->fsf_command); @@ -370,7 +367,9 @@ 
zfcp_hba_dbf_view_response(char *out_buf, rec->fsf_reqid); len += zfcp_dbf_view(out_buf + len, "fsf_seqno", "0x%08x", rec->fsf_seqno); - len += zfcp_dbf_stck(out_buf + len, "fsf_issued", rec->fsf_issued); + zfcp_dbf_timestamp(rec->fsf_issued, &t); + len += zfcp_dbf_view(out_buf + len, "fsf_issued", "%011lu:%06lu", + t.tv_sec, t.tv_nsec); len += zfcp_dbf_view(out_buf + len, "fsf_prot_status", "0x%08x", rec->fsf_prot_status); len += zfcp_dbf_view(out_buf + len, "fsf_status", "0x%08x", @@ -1222,6 +1221,7 @@ zfcp_scsi_dbf_view_format(debug_info_t * id, struct debug_view *view, struct zfcp_scsi_dbf_record *rec = (struct zfcp_scsi_dbf_record *)in_buf; int len = 0; + struct timespec t; if (strncmp(rec->tag, "dump", ZFCP_DBF_TAG_SIZE) == 0) return 0; @@ -1253,7 +1253,9 @@ zfcp_scsi_dbf_view_format(debug_info_t * id, struct debug_view *view, rec->fsf_reqid); len += zfcp_dbf_view(out_buf + len, "fsf_seqno", "0x%08x", rec->fsf_seqno); - len += zfcp_dbf_stck(out_buf + len, "fsf_issued", rec->fsf_issued); + zfcp_dbf_timestamp(rec->fsf_issued, &t); + len += zfcp_dbf_view(out_buf + len, "fsf_issued", "%011lu:%06lu", + t.tv_sec, t.tv_nsec); if (strncmp(rec->tag, "rslt", ZFCP_DBF_TAG_SIZE) == 0) { len += zfcp_dbf_view(out_buf + len, "fcp_rsp_validity", "0x%02x", -- cgit v1.2.3-59-g8ed1b From b634fff743be5e6010c5cbe36ea1e68ff56a6aee Mon Sep 17 00:00:00 2001 From: Martin Peschke Date: Mon, 31 Mar 2008 11:15:24 +0200 Subject: [SCSI] zfcp: Cleanup debug trace view functions. Improve readability of code by using more convenient output function. Signed-off-by: Martin Peschke Signed-off-by: Christof Schmitt Signed-off-by: James Bottomley --- drivers/s390/scsi/zfcp_dbf.c | 382 ++++++++++++++++++------------------------- 1 file changed, 162 insertions(+), 220 deletions(-) (limited to 'drivers/s390') diff --git a/drivers/s390/scsi/zfcp_dbf.c b/drivers/s390/scsi/zfcp_dbf.c index edd93533db40..15b534206b34 100644 --- a/drivers/s390/scsi/zfcp_dbf.c +++ b/drivers/s390/scsi/zfcp_dbf.c @@ -139,25 +139,21 @@ zfcp_dbf_view_header(debug_info_t * id, struct debug_view *view, int area, debug_entry_t * entry, char *out_buf) { struct zfcp_dbf_dump *dump = (struct zfcp_dbf_dump *)DEBUG_DATA(entry); - int len = 0; struct timespec t; + char *p = out_buf; if (strncmp(dump->tag, "dump", ZFCP_DBF_TAG_SIZE) != 0) { zfcp_dbf_timestamp(entry->id.stck, &t); - len += zfcp_dbf_view(out_buf + len, "timestamp", "%011lu:%06lu", - t.tv_sec, t.tv_nsec); - len += zfcp_dbf_view(out_buf + len, "cpu", "%02i", - entry->id.fields.cpuid); - } else { - len += zfcp_dbf_view_dump(out_buf + len, NULL, - dump->data, - dump->size, - dump->offset, dump->total_size); + zfcp_dbf_out(&p, "timestamp", "%011lu:%06lu", + t.tv_sec, t.tv_nsec); + zfcp_dbf_out(&p, "cpu", "%02i", entry->id.fields.cpuid); + } else { + p += zfcp_dbf_view_dump(p, NULL, dump->data, dump->size, + dump->offset, dump->total_size); if ((dump->offset + dump->size) == dump->total_size) - len += sprintf(out_buf + len, "\n"); + p += sprintf(p, "\n"); } - - return len; + return p - out_buf; } void zfcp_hba_dbf_event_fsf_response(struct zfcp_fsf_req *fsf_req) @@ -354,82 +350,65 @@ zfcp_hba_dbf_event_qdio(struct zfcp_adapter *adapter, unsigned int status, spin_unlock_irqrestore(&adapter->hba_dbf_lock, flags); } -static int -zfcp_hba_dbf_view_response(char *out_buf, - struct zfcp_hba_dbf_record_response *rec) +static int zfcp_hba_dbf_view_response(char *buf, + struct zfcp_hba_dbf_record_response *r) { - int len = 0; struct timespec t; + char *p = buf; - len += zfcp_dbf_view(out_buf + len, 
"fsf_command", "0x%08x", - rec->fsf_command); - len += zfcp_dbf_view(out_buf + len, "fsf_reqid", "0x%0Lx", - rec->fsf_reqid); - len += zfcp_dbf_view(out_buf + len, "fsf_seqno", "0x%08x", - rec->fsf_seqno); - zfcp_dbf_timestamp(rec->fsf_issued, &t); - len += zfcp_dbf_view(out_buf + len, "fsf_issued", "%011lu:%06lu", - t.tv_sec, t.tv_nsec); - len += zfcp_dbf_view(out_buf + len, "fsf_prot_status", "0x%08x", - rec->fsf_prot_status); - len += zfcp_dbf_view(out_buf + len, "fsf_status", "0x%08x", - rec->fsf_status); - len += zfcp_dbf_view_dump(out_buf + len, "fsf_prot_status_qual", - rec->fsf_prot_status_qual, - FSF_PROT_STATUS_QUAL_SIZE, - 0, FSF_PROT_STATUS_QUAL_SIZE); - len += zfcp_dbf_view_dump(out_buf + len, "fsf_status_qual", - rec->fsf_status_qual, - FSF_STATUS_QUALIFIER_SIZE, - 0, FSF_STATUS_QUALIFIER_SIZE); - len += zfcp_dbf_view(out_buf + len, "fsf_req_status", "0x%08x", - rec->fsf_req_status); - len += zfcp_dbf_view(out_buf + len, "sbal_first", "0x%02x", - rec->sbal_first); - len += zfcp_dbf_view(out_buf + len, "sbal_curr", "0x%02x", - rec->sbal_curr); - len += zfcp_dbf_view(out_buf + len, "sbal_last", "0x%02x", - rec->sbal_last); - len += zfcp_dbf_view(out_buf + len, "pool", "0x%02x", rec->pool); - - switch (rec->fsf_command) { + zfcp_dbf_out(&p, "fsf_command", "0x%08x", r->fsf_command); + zfcp_dbf_out(&p, "fsf_reqid", "0x%0Lx", r->fsf_reqid); + zfcp_dbf_out(&p, "fsf_seqno", "0x%08x", r->fsf_seqno); + zfcp_dbf_timestamp(r->fsf_issued, &t); + zfcp_dbf_out(&p, "fsf_issued", "%011lu:%06lu", t.tv_sec, t.tv_nsec); + zfcp_dbf_out(&p, "fsf_prot_status", "0x%08x", r->fsf_prot_status); + zfcp_dbf_out(&p, "fsf_status", "0x%08x", r->fsf_status); + p += zfcp_dbf_view_dump(p, "fsf_prot_status_qual", + r->fsf_prot_status_qual, + FSF_PROT_STATUS_QUAL_SIZE, + 0, FSF_PROT_STATUS_QUAL_SIZE); + p += zfcp_dbf_view_dump(p, "fsf_status_qual", + r->fsf_status_qual, + FSF_STATUS_QUALIFIER_SIZE, + 0, FSF_STATUS_QUALIFIER_SIZE); + zfcp_dbf_out(&p, "fsf_req_status", "0x%08x", r->fsf_req_status); + zfcp_dbf_out(&p, "sbal_first", "0x%02x", r->sbal_first); + zfcp_dbf_out(&p, "sbal_curr", "0x%02x", r->sbal_curr); + zfcp_dbf_out(&p, "sbal_last", "0x%02x", r->sbal_last); + zfcp_dbf_out(&p, "pool", "0x%02x", r->pool); + + switch (r->fsf_command) { case FSF_QTCB_FCP_CMND: - if (rec->fsf_req_status & ZFCP_STATUS_FSFREQ_TASK_MANAGEMENT) + if (r->fsf_req_status & ZFCP_STATUS_FSFREQ_TASK_MANAGEMENT) break; - len += zfcp_dbf_view(out_buf + len, "scsi_cmnd", "0x%0Lx", - rec->data.send_fcp.scsi_cmnd); - len += zfcp_dbf_view(out_buf + len, "scsi_serial", "0x%016Lx", - rec->data.send_fcp.scsi_serial); + zfcp_dbf_out(&p, "scsi_cmnd", "0x%0Lx", + r->data.send_fcp.scsi_cmnd); + zfcp_dbf_out(&p, "scsi_serial", "0x%016Lx", + r->data.send_fcp.scsi_serial); break; case FSF_QTCB_OPEN_PORT_WITH_DID: case FSF_QTCB_CLOSE_PORT: case FSF_QTCB_CLOSE_PHYSICAL_PORT: - len += zfcp_dbf_view(out_buf + len, "wwpn", "0x%016Lx", - rec->data.port.wwpn); - len += zfcp_dbf_view(out_buf + len, "d_id", "0x%06x", - rec->data.port.d_id); - len += zfcp_dbf_view(out_buf + len, "port_handle", "0x%08x", - rec->data.port.port_handle); + zfcp_dbf_out(&p, "wwpn", "0x%016Lx", r->data.port.wwpn); + zfcp_dbf_out(&p, "d_id", "0x%06x", r->data.port.d_id); + zfcp_dbf_out(&p, "port_handle", "0x%08x", + r->data.port.port_handle); break; case FSF_QTCB_OPEN_LUN: case FSF_QTCB_CLOSE_LUN: - len += zfcp_dbf_view(out_buf + len, "wwpn", "0x%016Lx", - rec->data.unit.wwpn); - len += zfcp_dbf_view(out_buf + len, "fcp_lun", "0x%016Lx", - rec->data.unit.fcp_lun); - len += 
zfcp_dbf_view(out_buf + len, "port_handle", "0x%08x", - rec->data.unit.port_handle); - len += zfcp_dbf_view(out_buf + len, "lun_handle", "0x%08x", - rec->data.unit.lun_handle); + zfcp_dbf_out(&p, "wwpn", "0x%016Lx", r->data.unit.wwpn); + zfcp_dbf_out(&p, "fcp_lun", "0x%016Lx", r->data.unit.fcp_lun); + zfcp_dbf_out(&p, "port_handle", "0x%08x", + r->data.unit.port_handle); + zfcp_dbf_out(&p, "lun_handle", "0x%08x", + r->data.unit.lun_handle); break; case FSF_QTCB_SEND_ELS: - len += zfcp_dbf_view(out_buf + len, "d_id", "0x%06x", - rec->data.send_els.d_id); - len += zfcp_dbf_view(out_buf + len, "ls_code", "0x%02x", - rec->data.send_els.ls_code); + zfcp_dbf_out(&p, "d_id", "0x%06x", r->data.send_els.d_id); + zfcp_dbf_out(&p, "ls_code", "0x%02x", r->data.send_els.ls_code); break; case FSF_QTCB_ABORT_FCP_CMND: @@ -440,47 +419,36 @@ zfcp_hba_dbf_view_response(char *out_buf, case FSF_QTCB_UPLOAD_CONTROL_FILE: break; } - - return len; + return p - buf; } -static int -zfcp_hba_dbf_view_status(char *out_buf, struct zfcp_hba_dbf_record_status *rec) +static int zfcp_hba_dbf_view_status(char *buf, + struct zfcp_hba_dbf_record_status *r) { - int len = 0; - - len += zfcp_dbf_view(out_buf + len, "failed", "0x%02x", rec->failed); - len += zfcp_dbf_view(out_buf + len, "status_type", "0x%08x", - rec->status_type); - len += zfcp_dbf_view(out_buf + len, "status_subtype", "0x%08x", - rec->status_subtype); - len += zfcp_dbf_view_dump(out_buf + len, "queue_designator", - (char *)&rec->queue_designator, - sizeof(struct fsf_queue_designator), - 0, sizeof(struct fsf_queue_designator)); - len += zfcp_dbf_view_dump(out_buf + len, "payload", - (char *)&rec->payload, - rec->payload_size, 0, rec->payload_size); + char *p = buf; - return len; + zfcp_dbf_out(&p, "failed", "0x%02x", r->failed); + zfcp_dbf_out(&p, "status_type", "0x%08x", r->status_type); + zfcp_dbf_out(&p, "status_subtype", "0x%08x", r->status_subtype); + p += zfcp_dbf_view_dump(p, "queue_designator", + (char *)&r->queue_designator, + sizeof(struct fsf_queue_designator), + 0, sizeof(struct fsf_queue_designator)); + p += zfcp_dbf_view_dump(p, "payload", (char *)&r->payload, + r->payload_size, 0, r->payload_size); + return p - buf; } -static int -zfcp_hba_dbf_view_qdio(char *out_buf, struct zfcp_hba_dbf_record_qdio *rec) +static int zfcp_hba_dbf_view_qdio(char *buf, struct zfcp_hba_dbf_record_qdio *r) { - int len = 0; - - len += zfcp_dbf_view(out_buf + len, "status", "0x%08x", rec->status); - len += zfcp_dbf_view(out_buf + len, "qdio_error", "0x%08x", - rec->qdio_error); - len += zfcp_dbf_view(out_buf + len, "siga_error", "0x%08x", - rec->siga_error); - len += zfcp_dbf_view(out_buf + len, "sbal_index", "0x%02x", - rec->sbal_index); - len += zfcp_dbf_view(out_buf + len, "sbal_count", "0x%02x", - rec->sbal_count); + char *p = buf; - return len; + zfcp_dbf_out(&p, "status", "0x%08x", r->status); + zfcp_dbf_out(&p, "qdio_error", "0x%08x", r->qdio_error); + zfcp_dbf_out(&p, "siga_error", "0x%08x", r->siga_error); + zfcp_dbf_out(&p, "sbal_index", "0x%02x", r->sbal_index); + zfcp_dbf_out(&p, "sbal_count", "0x%02x", r->sbal_count); + return p - buf; } static int @@ -720,8 +688,8 @@ static int zfcp_rec_dbf_view_format(debug_info_t *id, struct debug_view *view, zfcp_dbf_out(&p, "step", "0x%08Lx", r->u.action.step); break; } - sprintf(p, "\n"); - return (p - buf) + 1; + p += sprintf(p, "\n"); + return p - buf; } static struct debug_view zfcp_rec_dbf_view = { @@ -1024,71 +992,65 @@ static int zfcp_san_dbf_view_format(debug_info_t * id, struct debug_view *view, char 
*out_buf, const char *in_buf) { - struct zfcp_san_dbf_record *rec = (struct zfcp_san_dbf_record *)in_buf; + struct zfcp_san_dbf_record *r = (struct zfcp_san_dbf_record *)in_buf; char *buffer = NULL; int buflen = 0, total = 0; - int len = 0; + char *p = out_buf; - if (strncmp(rec->tag, "dump", ZFCP_DBF_TAG_SIZE) == 0) + if (strncmp(r->tag, "dump", ZFCP_DBF_TAG_SIZE) == 0) return 0; - len += zfcp_dbf_tag(out_buf + len, "tag", rec->tag); - len += zfcp_dbf_view(out_buf + len, "fsf_reqid", "0x%0Lx", - rec->fsf_reqid); - len += zfcp_dbf_view(out_buf + len, "fsf_seqno", "0x%08x", - rec->fsf_seqno); - len += zfcp_dbf_view(out_buf + len, "s_id", "0x%06x", rec->s_id); - len += zfcp_dbf_view(out_buf + len, "d_id", "0x%06x", rec->d_id); - - if (strncmp(rec->tag, "octc", ZFCP_DBF_TAG_SIZE) == 0) { - len += zfcp_dbf_view(out_buf + len, "cmd_req_code", "0x%04x", - rec->type.ct.type.request.cmd_req_code); - len += zfcp_dbf_view(out_buf + len, "revision", "0x%02x", - rec->type.ct.type.request.revision); - len += zfcp_dbf_view(out_buf + len, "gs_type", "0x%02x", - rec->type.ct.type.request.gs_type); - len += zfcp_dbf_view(out_buf + len, "gs_subtype", "0x%02x", - rec->type.ct.type.request.gs_subtype); - len += zfcp_dbf_view(out_buf + len, "options", "0x%02x", - rec->type.ct.type.request.options); - len += zfcp_dbf_view(out_buf + len, "max_res_size", "0x%04x", - rec->type.ct.type.request.max_res_size); - total = rec->type.ct.payload_size; - buffer = rec->type.ct.payload; + p += zfcp_dbf_tag(p, "tag", r->tag); + zfcp_dbf_out(&p, "fsf_reqid", "0x%0Lx", r->fsf_reqid); + zfcp_dbf_out(&p, "fsf_seqno", "0x%08x", r->fsf_seqno); + zfcp_dbf_out(&p, "s_id", "0x%06x", r->s_id); + zfcp_dbf_out(&p, "d_id", "0x%06x", r->d_id); + + if (strncmp(r->tag, "octc", ZFCP_DBF_TAG_SIZE) == 0) { + /* FIXME: struct zfcp_dbf_ct_req *ct = ...; */ + zfcp_dbf_out(&p, "cmd_req_code", "0x%04x", + r->type.ct.type.request.cmd_req_code); + zfcp_dbf_out(&p, "revision", "0x%02x", + r->type.ct.type.request.revision); + zfcp_dbf_out(&p, "gs_type", "0x%02x", + r->type.ct.type.request.gs_type); + zfcp_dbf_out(&p, "gs_subtype", "0x%02x", + r->type.ct.type.request.gs_subtype); + zfcp_dbf_out(&p, "options", "0x%02x", + r->type.ct.type.request.options); + zfcp_dbf_out(&p, "max_res_size", "0x%04x", + r->type.ct.type.request.max_res_size); + total = r->type.ct.payload_size; + buffer = r->type.ct.payload; buflen = min(total, ZFCP_DBF_CT_PAYLOAD); - } else if (strncmp(rec->tag, "rctc", ZFCP_DBF_TAG_SIZE) == 0) { - len += zfcp_dbf_view(out_buf + len, "cmd_rsp_code", "0x%04x", - rec->type.ct.type.response.cmd_rsp_code); - len += zfcp_dbf_view(out_buf + len, "revision", "0x%02x", - rec->type.ct.type.response.revision); - len += zfcp_dbf_view(out_buf + len, "reason_code", "0x%02x", - rec->type.ct.type.response.reason_code); - len += - zfcp_dbf_view(out_buf + len, "reason_code_expl", "0x%02x", - rec->type.ct.type.response.reason_code_expl); - len += - zfcp_dbf_view(out_buf + len, "vendor_unique", "0x%02x", - rec->type.ct.type.response.vendor_unique); - total = rec->type.ct.payload_size; - buffer = rec->type.ct.payload; + } else if (strncmp(r->tag, "rctc", ZFCP_DBF_TAG_SIZE) == 0) { + zfcp_dbf_out(&p, "cmd_rsp_code", "0x%04x", + r->type.ct.type.response.cmd_rsp_code); + zfcp_dbf_out(&p, "revision", "0x%02x", + r->type.ct.type.response.revision); + zfcp_dbf_out(&p, "reason_code", "0x%02x", + r->type.ct.type.response.reason_code); + zfcp_dbf_out(&p, "reason_code_expl", "0x%02x", + r->type.ct.type.response.reason_code_expl); + zfcp_dbf_out(&p, "vendor_unique", 
"0x%02x", + r->type.ct.type.response.vendor_unique); + total = r->type.ct.payload_size; + buffer = r->type.ct.payload; buflen = min(total, ZFCP_DBF_CT_PAYLOAD); - } else if (strncmp(rec->tag, "oels", ZFCP_DBF_TAG_SIZE) == 0 || - strncmp(rec->tag, "rels", ZFCP_DBF_TAG_SIZE) == 0 || - strncmp(rec->tag, "iels", ZFCP_DBF_TAG_SIZE) == 0) { - len += zfcp_dbf_view(out_buf + len, "ls_code", "0x%02x", - rec->type.els.ls_code); - total = rec->type.els.payload_size; - buffer = rec->type.els.payload; + } else if (strncmp(r->tag, "oels", ZFCP_DBF_TAG_SIZE) == 0 || + strncmp(r->tag, "rels", ZFCP_DBF_TAG_SIZE) == 0 || + strncmp(r->tag, "iels", ZFCP_DBF_TAG_SIZE) == 0) { + zfcp_dbf_out(&p, "ls_code", "0x%02x", r->type.els.ls_code); + total = r->type.els.payload_size; + buffer = r->type.els.payload; buflen = min(total, ZFCP_DBF_ELS_PAYLOAD); } - len += zfcp_dbf_view_dump(out_buf + len, "payload", - buffer, buflen, 0, total); - + p += zfcp_dbf_view_dump(p, "payload", buffer, buflen, 0, total); if (buflen == total) - len += sprintf(out_buf + len, "\n"); + p += sprintf(p, "\n"); - return len; + return p - out_buf; } static struct debug_view zfcp_san_dbf_view = { @@ -1218,71 +1180,51 @@ static int zfcp_scsi_dbf_view_format(debug_info_t * id, struct debug_view *view, char *out_buf, const char *in_buf) { - struct zfcp_scsi_dbf_record *rec = - (struct zfcp_scsi_dbf_record *)in_buf; - int len = 0; + struct zfcp_scsi_dbf_record *r = (struct zfcp_scsi_dbf_record *)in_buf; struct timespec t; + char *p = out_buf; - if (strncmp(rec->tag, "dump", ZFCP_DBF_TAG_SIZE) == 0) + if (strncmp(r->tag, "dump", ZFCP_DBF_TAG_SIZE) == 0) return 0; - len += zfcp_dbf_tag(out_buf + len, "tag", rec->tag); - len += zfcp_dbf_tag(out_buf + len, "tag2", rec->tag2); - len += zfcp_dbf_view(out_buf + len, "scsi_id", "0x%08x", rec->scsi_id); - len += zfcp_dbf_view(out_buf + len, "scsi_lun", "0x%08x", - rec->scsi_lun); - len += zfcp_dbf_view(out_buf + len, "scsi_result", "0x%08x", - rec->scsi_result); - len += zfcp_dbf_view(out_buf + len, "scsi_cmnd", "0x%0Lx", - rec->scsi_cmnd); - len += zfcp_dbf_view(out_buf + len, "scsi_serial", "0x%016Lx", - rec->scsi_serial); - len += zfcp_dbf_view_dump(out_buf + len, "scsi_opcode", - rec->scsi_opcode, - ZFCP_DBF_SCSI_OPCODE, - 0, ZFCP_DBF_SCSI_OPCODE); - len += zfcp_dbf_view(out_buf + len, "scsi_retries", "0x%02x", - rec->scsi_retries); - len += zfcp_dbf_view(out_buf + len, "scsi_allowed", "0x%02x", - rec->scsi_allowed); - if (strncmp(rec->tag, "abrt", ZFCP_DBF_TAG_SIZE) == 0) { - len += zfcp_dbf_view(out_buf + len, "old_fsf_reqid", "0x%0Lx", - rec->type.old_fsf_reqid); - } - len += zfcp_dbf_view(out_buf + len, "fsf_reqid", "0x%0Lx", - rec->fsf_reqid); - len += zfcp_dbf_view(out_buf + len, "fsf_seqno", "0x%08x", - rec->fsf_seqno); - zfcp_dbf_timestamp(rec->fsf_issued, &t); - len += zfcp_dbf_view(out_buf + len, "fsf_issued", "%011lu:%06lu", - t.tv_sec, t.tv_nsec); - if (strncmp(rec->tag, "rslt", ZFCP_DBF_TAG_SIZE) == 0) { - len += - zfcp_dbf_view(out_buf + len, "fcp_rsp_validity", "0x%02x", - rec->type.fcp.rsp_validity); - len += - zfcp_dbf_view(out_buf + len, "fcp_rsp_scsi_status", - "0x%02x", rec->type.fcp.rsp_scsi_status); - len += - zfcp_dbf_view(out_buf + len, "fcp_rsp_resid", "0x%08x", - rec->type.fcp.rsp_resid); - len += - zfcp_dbf_view(out_buf + len, "fcp_rsp_code", "0x%08x", - rec->type.fcp.rsp_code); - len += - zfcp_dbf_view(out_buf + len, "fcp_sns_info_len", "0x%08x", - rec->type.fcp.sns_info_len); - len += - zfcp_dbf_view_dump(out_buf + len, "fcp_sns_info", - rec->type.fcp.sns_info, - 
min((int)rec->type.fcp.sns_info_len, - ZFCP_DBF_SCSI_FCP_SNS_INFO), 0, - rec->type.fcp.sns_info_len); + p += zfcp_dbf_tag(p, "tag", r->tag); + p += zfcp_dbf_tag(p, "tag2", r->tag2); + zfcp_dbf_out(&p, "scsi_id", "0x%08x", r->scsi_id); + zfcp_dbf_out(&p, "scsi_lun", "0x%08x", r->scsi_lun); + zfcp_dbf_out(&p, "scsi_result", "0x%08x", r->scsi_result); + zfcp_dbf_out(&p, "scsi_cmnd", "0x%0Lx", r->scsi_cmnd); + zfcp_dbf_out(&p, "scsi_serial", "0x%016Lx", r->scsi_serial); + p += zfcp_dbf_view_dump(p, "scsi_opcode", r->scsi_opcode, + ZFCP_DBF_SCSI_OPCODE, 0, ZFCP_DBF_SCSI_OPCODE); + zfcp_dbf_out(&p, "scsi_retries", "0x%02x", r->scsi_retries); + zfcp_dbf_out(&p, "scsi_allowed", "0x%02x", r->scsi_allowed); + if (strncmp(r->tag, "abrt", ZFCP_DBF_TAG_SIZE) == 0) + zfcp_dbf_out(&p, "old_fsf_reqid", "0x%0Lx", + r->type.old_fsf_reqid); + zfcp_dbf_out(&p, "fsf_reqid", "0x%0Lx", r->fsf_reqid); + zfcp_dbf_out(&p, "fsf_seqno", "0x%08x", r->fsf_seqno); + zfcp_dbf_timestamp(r->fsf_issued, &t); + zfcp_dbf_out(&p, "fsf_issued", "%011lu:%06lu", t.tv_sec, t.tv_nsec); + + if (strncmp(r->tag, "rslt", ZFCP_DBF_TAG_SIZE) == 0) { + zfcp_dbf_out(&p, "fcp_rsp_validity", "0x%02x", + r->type.fcp.rsp_validity); + zfcp_dbf_out(&p, "fcp_rsp_scsi_status", + "0x%02x", r->type.fcp.rsp_scsi_status); + zfcp_dbf_out(&p, "fcp_rsp_resid", "0x%08x", + r->type.fcp.rsp_resid); + zfcp_dbf_out(&p, "fcp_rsp_code", "0x%08x", + r->type.fcp.rsp_code); + zfcp_dbf_out(&p, "fcp_sns_info_len", "0x%08x", + r->type.fcp.sns_info_len); + p += zfcp_dbf_view_dump(p, "fcp_sns_info", + r->type.fcp.sns_info, + min((int)r->type.fcp.sns_info_len, + ZFCP_DBF_SCSI_FCP_SNS_INFO), 0, + r->type.fcp.sns_info_len); } - - len += sprintf(out_buf + len, "\n"); - - return len; + p += sprintf(p, "\n"); + return p - out_buf; } static struct debug_view zfcp_scsi_dbf_view = { -- cgit v1.2.3-59-g8ed1b From c7b7fc8c30df49a4ca5743d5f062666adcc1dc15 Mon Sep 17 00:00:00 2001 From: Martin Peschke Date: Mon, 31 Mar 2008 11:15:25 +0200 Subject: [SCSI] zfcp: Remove obsolete output function from debug trace. Remove obsolete output function. Signed-off-by: Martin Peschke Signed-off-by: Christof Schmitt Signed-off-by: James Bottomley --- drivers/s390/scsi/zfcp_dbf.c | 15 --------------- 1 file changed, 15 deletions(-) (limited to 'drivers/s390') diff --git a/drivers/s390/scsi/zfcp_dbf.c b/drivers/s390/scsi/zfcp_dbf.c index 15b534206b34..427115b17ed4 100644 --- a/drivers/s390/scsi/zfcp_dbf.c +++ b/drivers/s390/scsi/zfcp_dbf.c @@ -74,21 +74,6 @@ static int zfcp_dbf_tag(char *out_buf, const char *label, const char *tag) return len; } -static int -zfcp_dbf_view(char *out_buf, const char *label, const char *format, ...) -{ - va_list arg; - int len = 0; - - len += sprintf(out_buf + len, "%-24s", label); - va_start(arg, format); - len += vsprintf(out_buf + len, format, arg); - va_end(arg); - len += sprintf(out_buf + len, "\n"); - - return len; -} - static void zfcp_dbf_outs(char **buf, const char *s1, const char *s2) { *buf += sprintf(*buf, "%-24s%s\n", s1, s2); -- cgit v1.2.3-59-g8ed1b From df29f4ac4d3e8fcc8d8c85b7aeb8cc0df2a3f68a Mon Sep 17 00:00:00 2001 From: Martin Peschke Date: Mon, 31 Mar 2008 11:15:26 +0200 Subject: [SCSI] zfcp: Simplify usage of hex dump output function for debug trace. Simplify usage of output function for hex dumps. 
Signed-off-by: Martin Peschke Signed-off-by: Christof Schmitt Signed-off-by: James Bottomley --- drivers/s390/scsi/zfcp_dbf.c | 69 ++++++++++++++++++-------------------------- 1 file changed, 28 insertions(+), 41 deletions(-) (limited to 'drivers/s390') diff --git a/drivers/s390/scsi/zfcp_dbf.c b/drivers/s390/scsi/zfcp_dbf.c index 427115b17ed4..0341fc5e06ce 100644 --- a/drivers/s390/scsi/zfcp_dbf.c +++ b/drivers/s390/scsi/zfcp_dbf.c @@ -90,33 +90,26 @@ static void zfcp_dbf_out(char **buf, const char *s, const char *format, ...) *buf += sprintf(*buf, "\n"); } -static int -zfcp_dbf_view_dump(char *out_buf, const char *label, - char *buffer, int buflen, int offset, int total_size) +static void zfcp_dbf_outd(char **p, const char *label, char *buffer, + int buflen, int offset, int total_size) { - int len = 0; - - if (offset == 0) - len += sprintf(out_buf + len, "%-24s ", label); - + if (!offset) + *p += sprintf(*p, "%-24s ", label); while (buflen--) { if (offset > 0) { if ((offset % 32) == 0) - len += sprintf(out_buf + len, "\n%-24c ", ' '); + *p += sprintf(*p, "\n%-24c ", ' '); else if ((offset % 4) == 0) - len += sprintf(out_buf + len, " "); + *p += sprintf(*p, " "); } - len += sprintf(out_buf + len, "%02x", *buffer++); + *p += sprintf(*p, "%02x", *buffer++); if (++offset == total_size) { - len += sprintf(out_buf + len, "\n"); + *p += sprintf(*p, "\n"); break; } } - - if (total_size == 0) - len += sprintf(out_buf + len, "\n"); - - return len; + if (!total_size) + *p += sprintf(*p, "\n"); } static int @@ -133,8 +126,8 @@ zfcp_dbf_view_header(debug_info_t * id, struct debug_view *view, int area, t.tv_sec, t.tv_nsec); zfcp_dbf_out(&p, "cpu", "%02i", entry->id.fields.cpuid); } else { - p += zfcp_dbf_view_dump(p, NULL, dump->data, dump->size, - dump->offset, dump->total_size); + zfcp_dbf_outd(&p, NULL, dump->data, dump->size, dump->offset, + dump->total_size); if ((dump->offset + dump->size) == dump->total_size) p += sprintf(p, "\n"); } @@ -348,14 +341,10 @@ static int zfcp_hba_dbf_view_response(char *buf, zfcp_dbf_out(&p, "fsf_issued", "%011lu:%06lu", t.tv_sec, t.tv_nsec); zfcp_dbf_out(&p, "fsf_prot_status", "0x%08x", r->fsf_prot_status); zfcp_dbf_out(&p, "fsf_status", "0x%08x", r->fsf_status); - p += zfcp_dbf_view_dump(p, "fsf_prot_status_qual", - r->fsf_prot_status_qual, - FSF_PROT_STATUS_QUAL_SIZE, - 0, FSF_PROT_STATUS_QUAL_SIZE); - p += zfcp_dbf_view_dump(p, "fsf_status_qual", - r->fsf_status_qual, - FSF_STATUS_QUALIFIER_SIZE, - 0, FSF_STATUS_QUALIFIER_SIZE); + zfcp_dbf_outd(&p, "fsf_prot_status_qual", r->fsf_prot_status_qual, + FSF_PROT_STATUS_QUAL_SIZE, 0, FSF_PROT_STATUS_QUAL_SIZE); + zfcp_dbf_outd(&p, "fsf_status_qual", r->fsf_status_qual, + FSF_STATUS_QUALIFIER_SIZE, 0, FSF_STATUS_QUALIFIER_SIZE); zfcp_dbf_out(&p, "fsf_req_status", "0x%08x", r->fsf_req_status); zfcp_dbf_out(&p, "sbal_first", "0x%02x", r->sbal_first); zfcp_dbf_out(&p, "sbal_curr", "0x%02x", r->sbal_curr); @@ -415,12 +404,11 @@ static int zfcp_hba_dbf_view_status(char *buf, zfcp_dbf_out(&p, "failed", "0x%02x", r->failed); zfcp_dbf_out(&p, "status_type", "0x%08x", r->status_type); zfcp_dbf_out(&p, "status_subtype", "0x%08x", r->status_subtype); - p += zfcp_dbf_view_dump(p, "queue_designator", - (char *)&r->queue_designator, - sizeof(struct fsf_queue_designator), - 0, sizeof(struct fsf_queue_designator)); - p += zfcp_dbf_view_dump(p, "payload", (char *)&r->payload, - r->payload_size, 0, r->payload_size); + zfcp_dbf_outd(&p, "queue_designator", (char *)&r->queue_designator, + sizeof(struct fsf_queue_designator), 0, + 
sizeof(struct fsf_queue_designator)); + zfcp_dbf_outd(&p, "payload", (char *)&r->payload, r->payload_size, 0, + r->payload_size); return p - buf; } @@ -1031,7 +1019,7 @@ zfcp_san_dbf_view_format(debug_info_t * id, struct debug_view *view, buflen = min(total, ZFCP_DBF_ELS_PAYLOAD); } - p += zfcp_dbf_view_dump(p, "payload", buffer, buflen, 0, total); + zfcp_dbf_outd(&p, "payload", buffer, buflen, 0, total); if (buflen == total) p += sprintf(p, "\n"); @@ -1179,8 +1167,8 @@ zfcp_scsi_dbf_view_format(debug_info_t * id, struct debug_view *view, zfcp_dbf_out(&p, "scsi_result", "0x%08x", r->scsi_result); zfcp_dbf_out(&p, "scsi_cmnd", "0x%0Lx", r->scsi_cmnd); zfcp_dbf_out(&p, "scsi_serial", "0x%016Lx", r->scsi_serial); - p += zfcp_dbf_view_dump(p, "scsi_opcode", r->scsi_opcode, - ZFCP_DBF_SCSI_OPCODE, 0, ZFCP_DBF_SCSI_OPCODE); + zfcp_dbf_outd(&p, "scsi_opcode", r->scsi_opcode, ZFCP_DBF_SCSI_OPCODE, + 0, ZFCP_DBF_SCSI_OPCODE); zfcp_dbf_out(&p, "scsi_retries", "0x%02x", r->scsi_retries); zfcp_dbf_out(&p, "scsi_allowed", "0x%02x", r->scsi_allowed); if (strncmp(r->tag, "abrt", ZFCP_DBF_TAG_SIZE) == 0) @@ -1202,11 +1190,10 @@ zfcp_scsi_dbf_view_format(debug_info_t * id, struct debug_view *view, r->type.fcp.rsp_code); zfcp_dbf_out(&p, "fcp_sns_info_len", "0x%08x", r->type.fcp.sns_info_len); - p += zfcp_dbf_view_dump(p, "fcp_sns_info", - r->type.fcp.sns_info, - min((int)r->type.fcp.sns_info_len, - ZFCP_DBF_SCSI_FCP_SNS_INFO), 0, - r->type.fcp.sns_info_len); + zfcp_dbf_outd(&p, "fcp_sns_info", r->type.fcp.sns_info, + min((int)r->type.fcp.sns_info_len, + ZFCP_DBF_SCSI_FCP_SNS_INFO), 0, + r->type.fcp.sns_info_len); } p += sprintf(p, "\n"); return p - out_buf; -- cgit v1.2.3-59-g8ed1b From a9c857757ea09b63040bba7ab149557ac2bfb274 Mon Sep 17 00:00:00 2001 From: Martin Peschke Date: Mon, 31 Mar 2008 11:15:27 +0200 Subject: [SCSI] zfcp: Simplify zfcp_dbf_tag and related functions in debug trace. Simplify usage of zfcp_dbf_tag() and calling functions. 
Signed-off-by: Martin Peschke Signed-off-by: Christof Schmitt Signed-off-by: James Bottomley --- drivers/s390/scsi/zfcp_dbf.c | 143 ++++++++++++++++++++----------------------- 1 file changed, 65 insertions(+), 78 deletions(-) (limited to 'drivers/s390') diff --git a/drivers/s390/scsi/zfcp_dbf.c b/drivers/s390/scsi/zfcp_dbf.c index 0341fc5e06ce..0ab985c037fe 100644 --- a/drivers/s390/scsi/zfcp_dbf.c +++ b/drivers/s390/scsi/zfcp_dbf.c @@ -62,16 +62,14 @@ static void zfcp_dbf_timestamp(unsigned long long stck, struct timespec *time) time->tv_nsec = ((stck * 1000) >> 12); } -static int zfcp_dbf_tag(char *out_buf, const char *label, const char *tag) +static void zfcp_dbf_tag(char **p, const char *label, const char *tag) { - int len = 0, i; + int i; - len += sprintf(out_buf + len, "%-24s", label); + *p += sprintf(*p, "%-24s", label); for (i = 0; i < ZFCP_DBF_TAG_SIZE; i++) - len += sprintf(out_buf + len, "%c", tag[i]); - len += sprintf(out_buf + len, "\n"); - - return len; + *p += sprintf(*p, "%c", tag[i]); + *p += sprintf(*p, "\n"); } static void zfcp_dbf_outs(char **buf, const char *s1, const char *s2) @@ -328,61 +326,60 @@ zfcp_hba_dbf_event_qdio(struct zfcp_adapter *adapter, unsigned int status, spin_unlock_irqrestore(&adapter->hba_dbf_lock, flags); } -static int zfcp_hba_dbf_view_response(char *buf, - struct zfcp_hba_dbf_record_response *r) +static void zfcp_hba_dbf_view_response(char **p, + struct zfcp_hba_dbf_record_response *r) { struct timespec t; - char *p = buf; - zfcp_dbf_out(&p, "fsf_command", "0x%08x", r->fsf_command); - zfcp_dbf_out(&p, "fsf_reqid", "0x%0Lx", r->fsf_reqid); - zfcp_dbf_out(&p, "fsf_seqno", "0x%08x", r->fsf_seqno); + zfcp_dbf_out(p, "fsf_command", "0x%08x", r->fsf_command); + zfcp_dbf_out(p, "fsf_reqid", "0x%0Lx", r->fsf_reqid); + zfcp_dbf_out(p, "fsf_seqno", "0x%08x", r->fsf_seqno); zfcp_dbf_timestamp(r->fsf_issued, &t); - zfcp_dbf_out(&p, "fsf_issued", "%011lu:%06lu", t.tv_sec, t.tv_nsec); - zfcp_dbf_out(&p, "fsf_prot_status", "0x%08x", r->fsf_prot_status); - zfcp_dbf_out(&p, "fsf_status", "0x%08x", r->fsf_status); - zfcp_dbf_outd(&p, "fsf_prot_status_qual", r->fsf_prot_status_qual, + zfcp_dbf_out(p, "fsf_issued", "%011lu:%06lu", t.tv_sec, t.tv_nsec); + zfcp_dbf_out(p, "fsf_prot_status", "0x%08x", r->fsf_prot_status); + zfcp_dbf_out(p, "fsf_status", "0x%08x", r->fsf_status); + zfcp_dbf_outd(p, "fsf_prot_status_qual", r->fsf_prot_status_qual, FSF_PROT_STATUS_QUAL_SIZE, 0, FSF_PROT_STATUS_QUAL_SIZE); - zfcp_dbf_outd(&p, "fsf_status_qual", r->fsf_status_qual, + zfcp_dbf_outd(p, "fsf_status_qual", r->fsf_status_qual, FSF_STATUS_QUALIFIER_SIZE, 0, FSF_STATUS_QUALIFIER_SIZE); - zfcp_dbf_out(&p, "fsf_req_status", "0x%08x", r->fsf_req_status); - zfcp_dbf_out(&p, "sbal_first", "0x%02x", r->sbal_first); - zfcp_dbf_out(&p, "sbal_curr", "0x%02x", r->sbal_curr); - zfcp_dbf_out(&p, "sbal_last", "0x%02x", r->sbal_last); - zfcp_dbf_out(&p, "pool", "0x%02x", r->pool); + zfcp_dbf_out(p, "fsf_req_status", "0x%08x", r->fsf_req_status); + zfcp_dbf_out(p, "sbal_first", "0x%02x", r->sbal_first); + zfcp_dbf_out(p, "sbal_curr", "0x%02x", r->sbal_curr); + zfcp_dbf_out(p, "sbal_last", "0x%02x", r->sbal_last); + zfcp_dbf_out(p, "pool", "0x%02x", r->pool); switch (r->fsf_command) { case FSF_QTCB_FCP_CMND: if (r->fsf_req_status & ZFCP_STATUS_FSFREQ_TASK_MANAGEMENT) break; - zfcp_dbf_out(&p, "scsi_cmnd", "0x%0Lx", + zfcp_dbf_out(p, "scsi_cmnd", "0x%0Lx", r->data.send_fcp.scsi_cmnd); - zfcp_dbf_out(&p, "scsi_serial", "0x%016Lx", + zfcp_dbf_out(p, "scsi_serial", "0x%016Lx", 
r->data.send_fcp.scsi_serial); break; case FSF_QTCB_OPEN_PORT_WITH_DID: case FSF_QTCB_CLOSE_PORT: case FSF_QTCB_CLOSE_PHYSICAL_PORT: - zfcp_dbf_out(&p, "wwpn", "0x%016Lx", r->data.port.wwpn); - zfcp_dbf_out(&p, "d_id", "0x%06x", r->data.port.d_id); - zfcp_dbf_out(&p, "port_handle", "0x%08x", + zfcp_dbf_out(p, "wwpn", "0x%016Lx", r->data.port.wwpn); + zfcp_dbf_out(p, "d_id", "0x%06x", r->data.port.d_id); + zfcp_dbf_out(p, "port_handle", "0x%08x", r->data.port.port_handle); break; case FSF_QTCB_OPEN_LUN: case FSF_QTCB_CLOSE_LUN: - zfcp_dbf_out(&p, "wwpn", "0x%016Lx", r->data.unit.wwpn); - zfcp_dbf_out(&p, "fcp_lun", "0x%016Lx", r->data.unit.fcp_lun); - zfcp_dbf_out(&p, "port_handle", "0x%08x", + zfcp_dbf_out(p, "wwpn", "0x%016Lx", r->data.unit.wwpn); + zfcp_dbf_out(p, "fcp_lun", "0x%016Lx", r->data.unit.fcp_lun); + zfcp_dbf_out(p, "port_handle", "0x%08x", r->data.unit.port_handle); - zfcp_dbf_out(&p, "lun_handle", "0x%08x", + zfcp_dbf_out(p, "lun_handle", "0x%08x", r->data.unit.lun_handle); break; case FSF_QTCB_SEND_ELS: - zfcp_dbf_out(&p, "d_id", "0x%06x", r->data.send_els.d_id); - zfcp_dbf_out(&p, "ls_code", "0x%02x", r->data.send_els.ls_code); + zfcp_dbf_out(p, "d_id", "0x%06x", r->data.send_els.d_id); + zfcp_dbf_out(p, "ls_code", "0x%02x", r->data.send_els.ls_code); break; case FSF_QTCB_ABORT_FCP_CMND: @@ -393,62 +390,52 @@ static int zfcp_hba_dbf_view_response(char *buf, case FSF_QTCB_UPLOAD_CONTROL_FILE: break; } - return p - buf; } -static int zfcp_hba_dbf_view_status(char *buf, - struct zfcp_hba_dbf_record_status *r) +static void zfcp_hba_dbf_view_status(char **p, + struct zfcp_hba_dbf_record_status *r) { - char *p = buf; - - zfcp_dbf_out(&p, "failed", "0x%02x", r->failed); - zfcp_dbf_out(&p, "status_type", "0x%08x", r->status_type); - zfcp_dbf_out(&p, "status_subtype", "0x%08x", r->status_subtype); - zfcp_dbf_outd(&p, "queue_designator", (char *)&r->queue_designator, + zfcp_dbf_out(p, "failed", "0x%02x", r->failed); + zfcp_dbf_out(p, "status_type", "0x%08x", r->status_type); + zfcp_dbf_out(p, "status_subtype", "0x%08x", r->status_subtype); + zfcp_dbf_outd(p, "queue_designator", (char *)&r->queue_designator, sizeof(struct fsf_queue_designator), 0, sizeof(struct fsf_queue_designator)); - zfcp_dbf_outd(&p, "payload", (char *)&r->payload, r->payload_size, 0, + zfcp_dbf_outd(p, "payload", (char *)&r->payload, r->payload_size, 0, r->payload_size); - return p - buf; } -static int zfcp_hba_dbf_view_qdio(char *buf, struct zfcp_hba_dbf_record_qdio *r) +static void zfcp_hba_dbf_view_qdio(char **p, struct zfcp_hba_dbf_record_qdio *r) { - char *p = buf; - - zfcp_dbf_out(&p, "status", "0x%08x", r->status); - zfcp_dbf_out(&p, "qdio_error", "0x%08x", r->qdio_error); - zfcp_dbf_out(&p, "siga_error", "0x%08x", r->siga_error); - zfcp_dbf_out(&p, "sbal_index", "0x%02x", r->sbal_index); - zfcp_dbf_out(&p, "sbal_count", "0x%02x", r->sbal_count); - return p - buf; + zfcp_dbf_out(p, "status", "0x%08x", r->status); + zfcp_dbf_out(p, "qdio_error", "0x%08x", r->qdio_error); + zfcp_dbf_out(p, "siga_error", "0x%08x", r->siga_error); + zfcp_dbf_out(p, "sbal_index", "0x%02x", r->sbal_index); + zfcp_dbf_out(p, "sbal_count", "0x%02x", r->sbal_count); } -static int -zfcp_hba_dbf_view_format(debug_info_t * id, struct debug_view *view, - char *out_buf, const char *in_buf) +static int zfcp_hba_dbf_view_format(debug_info_t *id, struct debug_view *view, + char *out_buf, const char *in_buf) { - struct zfcp_hba_dbf_record *rec = (struct zfcp_hba_dbf_record *)in_buf; - int len = 0; + struct zfcp_hba_dbf_record *r = (struct 
zfcp_hba_dbf_record *)in_buf; + char *p = out_buf; - if (strncmp(rec->tag, "dump", ZFCP_DBF_TAG_SIZE) == 0) + if (strncmp(r->tag, "dump", ZFCP_DBF_TAG_SIZE) == 0) return 0; - len += zfcp_dbf_tag(out_buf + len, "tag", rec->tag); - if (isalpha(rec->tag2[0])) - len += zfcp_dbf_tag(out_buf + len, "tag2", rec->tag2); - if (strncmp(rec->tag, "resp", ZFCP_DBF_TAG_SIZE) == 0) - len += zfcp_hba_dbf_view_response(out_buf + len, - &rec->type.response); - else if (strncmp(rec->tag, "stat", ZFCP_DBF_TAG_SIZE) == 0) - len += zfcp_hba_dbf_view_status(out_buf + len, - &rec->type.status); - else if (strncmp(rec->tag, "qdio", ZFCP_DBF_TAG_SIZE) == 0) - len += zfcp_hba_dbf_view_qdio(out_buf + len, &rec->type.qdio); - - len += sprintf(out_buf + len, "\n"); - - return len; + zfcp_dbf_tag(&p, "tag", r->tag); + if (isalpha(r->tag2[0])) + zfcp_dbf_tag(&p, "tag2", r->tag2); + + if (strncmp(r->tag, "resp", ZFCP_DBF_TAG_SIZE) == 0) + zfcp_hba_dbf_view_response(&p, &r->type.response); + else if (strncmp(r->tag, "stat", ZFCP_DBF_TAG_SIZE) == 0) + zfcp_hba_dbf_view_status(&p, &r->type.status); + else if (strncmp(r->tag, "qdio", ZFCP_DBF_TAG_SIZE) == 0) + zfcp_hba_dbf_view_qdio(&p, &r->type.qdio); + + p += sprintf(p, "\n"); + return p - out_buf; } static struct debug_view zfcp_hba_dbf_view = { @@ -973,7 +960,7 @@ zfcp_san_dbf_view_format(debug_info_t * id, struct debug_view *view, if (strncmp(r->tag, "dump", ZFCP_DBF_TAG_SIZE) == 0) return 0; - p += zfcp_dbf_tag(p, "tag", r->tag); + zfcp_dbf_tag(&p, "tag", r->tag); zfcp_dbf_out(&p, "fsf_reqid", "0x%0Lx", r->fsf_reqid); zfcp_dbf_out(&p, "fsf_seqno", "0x%08x", r->fsf_seqno); zfcp_dbf_out(&p, "s_id", "0x%06x", r->s_id); @@ -1160,8 +1147,8 @@ zfcp_scsi_dbf_view_format(debug_info_t * id, struct debug_view *view, if (strncmp(r->tag, "dump", ZFCP_DBF_TAG_SIZE) == 0) return 0; - p += zfcp_dbf_tag(p, "tag", r->tag); - p += zfcp_dbf_tag(p, "tag2", r->tag2); + zfcp_dbf_tag(&p, "tag", r->tag); + zfcp_dbf_tag(&p, "tag2", r->tag2); zfcp_dbf_out(&p, "scsi_id", "0x%08x", r->scsi_id); zfcp_dbf_out(&p, "scsi_lun", "0x%08x", r->scsi_lun); zfcp_dbf_out(&p, "scsi_result", "0x%08x", r->scsi_result); -- cgit v1.2.3-59-g8ed1b From 2b604c9b909ce1c98e51208eee2f70ee3e604079 Mon Sep 17 00:00:00 2001 From: Christof Schmitt Date: Mon, 31 Mar 2008 11:15:28 +0200 Subject: [SCSI] zfcp: Move DBF definitions to private header file Unclutter the global zfcp_def.h header. Move everything required to call into the debug feature to a new header file. Signed-off-by: Christof Schmitt Signed-off-by: Martin Peschke Signed-off-by: James Bottomley --- drivers/s390/scsi/zfcp_dbf.h | 233 +++++++++++++++++++++++++++++++++++++++++++ drivers/s390/scsi/zfcp_def.h | 210 +------------------------------------- 2 files changed, 234 insertions(+), 209 deletions(-) create mode 100644 drivers/s390/scsi/zfcp_dbf.h (limited to 'drivers/s390') diff --git a/drivers/s390/scsi/zfcp_dbf.h b/drivers/s390/scsi/zfcp_dbf.h new file mode 100644 index 000000000000..5d88c01d5981 --- /dev/null +++ b/drivers/s390/scsi/zfcp_dbf.h @@ -0,0 +1,233 @@ +/* + * This file is part of the zfcp device driver for + * FCP adapters for IBM System z9 and zSeries. + * + * Copyright IBM Corp. 2008, 2008 + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2, or (at your option) + * any later version. 
+ * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. + */ + +#ifndef ZFCP_DBF_H +#define ZFCP_DBF_H + +#include "zfcp_fsf.h" + +#define ZFCP_DBF_TAG_SIZE 4 + +struct zfcp_dbf_dump { + u8 tag[ZFCP_DBF_TAG_SIZE]; + u32 total_size; /* size of total dump data */ + u32 offset; /* how much data has being already dumped */ + u32 size; /* how much data comes with this record */ + u8 data[]; /* dump data */ +} __attribute__ ((packed)); + +struct zfcp_rec_dbf_record_thread { + u32 sema; + u32 total; + u32 ready; + u32 running; +} __attribute__ ((packed)); + +struct zfcp_rec_dbf_record_target { + u64 ref; + u32 status; + u32 d_id; + u64 wwpn; + u64 fcp_lun; + u32 erp_count; +} __attribute__ ((packed)); + +struct zfcp_rec_dbf_record_trigger { + u8 want; + u8 need; + u32 as; + u32 ps; + u32 us; + u64 ref; + u64 action; + u64 wwpn; + u64 fcp_lun; +} __attribute__ ((packed)); + +struct zfcp_rec_dbf_record_action { + u32 status; + u32 step; + u64 action; + u64 fsf_req; +} __attribute__ ((packed)); + +struct zfcp_rec_dbf_record { + u8 id; + u8 id2; + union { + struct zfcp_rec_dbf_record_action action; + struct zfcp_rec_dbf_record_thread thread; + struct zfcp_rec_dbf_record_target target; + struct zfcp_rec_dbf_record_trigger trigger; + } u; +} __attribute__ ((packed)); + +enum { + ZFCP_REC_DBF_ID_ACTION, + ZFCP_REC_DBF_ID_THREAD, + ZFCP_REC_DBF_ID_TARGET, + ZFCP_REC_DBF_ID_TRIGGER, +}; + +struct zfcp_hba_dbf_record_response { + u32 fsf_command; + u64 fsf_reqid; + u32 fsf_seqno; + u64 fsf_issued; + u32 fsf_prot_status; + u32 fsf_status; + u8 fsf_prot_status_qual[FSF_PROT_STATUS_QUAL_SIZE]; + u8 fsf_status_qual[FSF_STATUS_QUALIFIER_SIZE]; + u32 fsf_req_status; + u8 sbal_first; + u8 sbal_curr; + u8 sbal_last; + u8 pool; + u64 erp_action; + union { + struct { + u64 scsi_cmnd; + u64 scsi_serial; + } send_fcp; + struct { + u64 wwpn; + u32 d_id; + u32 port_handle; + } port; + struct { + u64 wwpn; + u64 fcp_lun; + u32 port_handle; + u32 lun_handle; + } unit; + struct { + u32 d_id; + u8 ls_code; + } send_els; + } data; +} __attribute__ ((packed)); + +struct zfcp_hba_dbf_record_status { + u8 failed; + u32 status_type; + u32 status_subtype; + struct fsf_queue_designator + queue_designator; + u32 payload_size; +#define ZFCP_DBF_UNSOL_PAYLOAD 80 +#define ZFCP_DBF_UNSOL_PAYLOAD_SENSE_DATA_AVAIL 32 +#define ZFCP_DBF_UNSOL_PAYLOAD_BIT_ERROR_THRESHOLD 56 +#define ZFCP_DBF_UNSOL_PAYLOAD_FEATURE_UPDATE_ALERT 2 * sizeof(u32) + u8 payload[ZFCP_DBF_UNSOL_PAYLOAD]; +} __attribute__ ((packed)); + +struct zfcp_hba_dbf_record_qdio { + u32 status; + u32 qdio_error; + u32 siga_error; + u8 sbal_index; + u8 sbal_count; +} __attribute__ ((packed)); + +struct zfcp_hba_dbf_record { + u8 tag[ZFCP_DBF_TAG_SIZE]; + u8 tag2[ZFCP_DBF_TAG_SIZE]; + union { + struct zfcp_hba_dbf_record_response response; + struct zfcp_hba_dbf_record_status status; + struct zfcp_hba_dbf_record_qdio qdio; + } type; +} __attribute__ ((packed)); + +struct zfcp_san_dbf_record_ct { + union { + struct { + u16 cmd_req_code; + u8 revision; + u8 gs_type; + u8 gs_subtype; + u8 options; + u16 max_res_size; + } request; + struct { + u16 cmd_rsp_code; + u8 revision; + u8 reason_code; + 
u8 reason_code_expl; + u8 vendor_unique; + } response; + } type; + u32 payload_size; +#define ZFCP_DBF_CT_PAYLOAD 24 + u8 payload[ZFCP_DBF_CT_PAYLOAD]; +} __attribute__ ((packed)); + +struct zfcp_san_dbf_record_els { + u8 ls_code; + u32 payload_size; +#define ZFCP_DBF_ELS_PAYLOAD 32 +#define ZFCP_DBF_ELS_MAX_PAYLOAD 1024 + u8 payload[ZFCP_DBF_ELS_PAYLOAD]; +} __attribute__ ((packed)); + +struct zfcp_san_dbf_record { + u8 tag[ZFCP_DBF_TAG_SIZE]; + u64 fsf_reqid; + u32 fsf_seqno; + u32 s_id; + u32 d_id; + union { + struct zfcp_san_dbf_record_ct ct; + struct zfcp_san_dbf_record_els els; + } type; +} __attribute__ ((packed)); + +struct zfcp_scsi_dbf_record { + u8 tag[ZFCP_DBF_TAG_SIZE]; + u8 tag2[ZFCP_DBF_TAG_SIZE]; + u32 scsi_id; + u32 scsi_lun; + u32 scsi_result; + u64 scsi_cmnd; + u64 scsi_serial; +#define ZFCP_DBF_SCSI_OPCODE 16 + u8 scsi_opcode[ZFCP_DBF_SCSI_OPCODE]; + u8 scsi_retries; + u8 scsi_allowed; + u64 fsf_reqid; + u32 fsf_seqno; + u64 fsf_issued; + union { + u64 old_fsf_reqid; + struct { + u8 rsp_validity; + u8 rsp_scsi_status; + u32 rsp_resid; + u8 rsp_code; +#define ZFCP_DBF_SCSI_FCP_SNS_INFO 16 +#define ZFCP_DBF_SCSI_MAX_FCP_SNS_INFO 256 + u32 sns_info_len; + u8 sns_info[ZFCP_DBF_SCSI_FCP_SNS_INFO]; + } fcp; + } type; +} __attribute__ ((packed)); + +#endif /* ZFCP_DBF_H */ diff --git a/drivers/s390/scsi/zfcp_def.h b/drivers/s390/scsi/zfcp_def.h index 85c0488719f7..ac23be50fd74 100644 --- a/drivers/s390/scsi/zfcp_def.h +++ b/drivers/s390/scsi/zfcp_def.h @@ -47,6 +47,7 @@ #include #include #include +#include "zfcp_dbf.h" #include "zfcp_fsf.h" @@ -261,215 +262,6 @@ struct fcp_logo { wwn_t nport_wwpn; } __attribute__((packed)); -/* - * DBF stuff - */ -#define ZFCP_DBF_TAG_SIZE 4 - -struct zfcp_dbf_dump { - u8 tag[ZFCP_DBF_TAG_SIZE]; - u32 total_size; /* size of total dump data */ - u32 offset; /* how much data has being already dumped */ - u32 size; /* how much data comes with this record */ - u8 data[]; /* dump data */ -} __attribute__ ((packed)); - -struct zfcp_rec_dbf_record_thread { - u32 sema; - u32 total; - u32 ready; - u32 running; -} __attribute__ ((packed)); - -struct zfcp_rec_dbf_record_target { - u64 ref; - u32 status; - u32 d_id; - u64 wwpn; - u64 fcp_lun; - u32 erp_count; -} __attribute__ ((packed)); - -struct zfcp_rec_dbf_record_trigger { - u8 want; - u8 need; - u32 as; - u32 ps; - u32 us; - u64 ref; - u64 action; - u64 wwpn; - u64 fcp_lun; -} __attribute__ ((packed)); - -struct zfcp_rec_dbf_record_action { - u32 status; - u32 step; - u64 action; - u64 fsf_req; -} __attribute__ ((packed)); - -struct zfcp_rec_dbf_record { - u8 id; - u8 id2; - union { - struct zfcp_rec_dbf_record_action action; - struct zfcp_rec_dbf_record_thread thread; - struct zfcp_rec_dbf_record_target target; - struct zfcp_rec_dbf_record_trigger trigger; - } u; -} __attribute__ ((packed)); - -enum { - ZFCP_REC_DBF_ID_ACTION, - ZFCP_REC_DBF_ID_THREAD, - ZFCP_REC_DBF_ID_TARGET, - ZFCP_REC_DBF_ID_TRIGGER, -}; - -struct zfcp_hba_dbf_record_response { - u32 fsf_command; - u64 fsf_reqid; - u32 fsf_seqno; - u64 fsf_issued; - u32 fsf_prot_status; - u32 fsf_status; - u8 fsf_prot_status_qual[FSF_PROT_STATUS_QUAL_SIZE]; - u8 fsf_status_qual[FSF_STATUS_QUALIFIER_SIZE]; - u32 fsf_req_status; - u8 sbal_first; - u8 sbal_curr; - u8 sbal_last; - u8 pool; - u64 erp_action; - union { - struct { - u64 scsi_cmnd; - u64 scsi_serial; - } send_fcp; - struct { - u64 wwpn; - u32 d_id; - u32 port_handle; - } port; - struct { - u64 wwpn; - u64 fcp_lun; - u32 port_handle; - u32 lun_handle; - } unit; - struct { - u32 d_id; - 
u8 ls_code; - } send_els; - } data; -} __attribute__ ((packed)); - -struct zfcp_hba_dbf_record_status { - u8 failed; - u32 status_type; - u32 status_subtype; - struct fsf_queue_designator - queue_designator; - u32 payload_size; -#define ZFCP_DBF_UNSOL_PAYLOAD 80 -#define ZFCP_DBF_UNSOL_PAYLOAD_SENSE_DATA_AVAIL 32 -#define ZFCP_DBF_UNSOL_PAYLOAD_BIT_ERROR_THRESHOLD 56 -#define ZFCP_DBF_UNSOL_PAYLOAD_FEATURE_UPDATE_ALERT 2 * sizeof(u32) - u8 payload[ZFCP_DBF_UNSOL_PAYLOAD]; -} __attribute__ ((packed)); - -struct zfcp_hba_dbf_record_qdio { - u32 status; - u32 qdio_error; - u32 siga_error; - u8 sbal_index; - u8 sbal_count; -} __attribute__ ((packed)); - -struct zfcp_hba_dbf_record { - u8 tag[ZFCP_DBF_TAG_SIZE]; - u8 tag2[ZFCP_DBF_TAG_SIZE]; - union { - struct zfcp_hba_dbf_record_response response; - struct zfcp_hba_dbf_record_status status; - struct zfcp_hba_dbf_record_qdio qdio; - } type; -} __attribute__ ((packed)); - -struct zfcp_san_dbf_record_ct { - union { - struct { - u16 cmd_req_code; - u8 revision; - u8 gs_type; - u8 gs_subtype; - u8 options; - u16 max_res_size; - } request; - struct { - u16 cmd_rsp_code; - u8 revision; - u8 reason_code; - u8 reason_code_expl; - u8 vendor_unique; - } response; - } type; - u32 payload_size; -#define ZFCP_DBF_CT_PAYLOAD 24 - u8 payload[ZFCP_DBF_CT_PAYLOAD]; -} __attribute__ ((packed)); - -struct zfcp_san_dbf_record_els { - u8 ls_code; - u32 payload_size; -#define ZFCP_DBF_ELS_PAYLOAD 32 -#define ZFCP_DBF_ELS_MAX_PAYLOAD 1024 - u8 payload[ZFCP_DBF_ELS_PAYLOAD]; -} __attribute__ ((packed)); - -struct zfcp_san_dbf_record { - u8 tag[ZFCP_DBF_TAG_SIZE]; - u64 fsf_reqid; - u32 fsf_seqno; - u32 s_id; - u32 d_id; - union { - struct zfcp_san_dbf_record_ct ct; - struct zfcp_san_dbf_record_els els; - } type; -} __attribute__ ((packed)); - -struct zfcp_scsi_dbf_record { - u8 tag[ZFCP_DBF_TAG_SIZE]; - u8 tag2[ZFCP_DBF_TAG_SIZE]; - u32 scsi_id; - u32 scsi_lun; - u32 scsi_result; - u64 scsi_cmnd; - u64 scsi_serial; -#define ZFCP_DBF_SCSI_OPCODE 16 - u8 scsi_opcode[ZFCP_DBF_SCSI_OPCODE]; - u8 scsi_retries; - u8 scsi_allowed; - u64 fsf_reqid; - u32 fsf_seqno; - u64 fsf_issued; - union { - u64 old_fsf_reqid; - struct { - u8 rsp_validity; - u8 rsp_scsi_status; - u32 rsp_resid; - u8 rsp_code; -#define ZFCP_DBF_SCSI_FCP_SNS_INFO 16 -#define ZFCP_DBF_SCSI_MAX_FCP_SNS_INFO 256 - u32 sns_info_len; - u8 sns_info[ZFCP_DBF_SCSI_FCP_SNS_INFO]; - } fcp; - } type; -} __attribute__ ((packed)); - /* * FC-FS stuff */ -- cgit v1.2.3-59-g8ed1b From 6bc473dd324237acbaa7a4c5e73d00dd5fc389ec Mon Sep 17 00:00:00 2001 From: Martin Peschke Date: Mon, 31 Mar 2008 11:15:29 +0200 Subject: [SCSI] zfcp: Shorten excessive names in debug trace. Saving on line breaks, improving readability, by shortening excessive function names and identifiers, by simplifying some functions call chains, and by simplifying nesting of some data structure. 
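For illustration, one of the zfcp_dbf.c hunks below turns wrapped accesses such as

    response->data.send_fcp.scsi_cmnd
        = (unsigned long)scsi_cmnd;
    response->data.send_fcp.scsi_serial
        = scsi_cmnd->serial_number;

into single-line assignments on the flattened, shorter members

    response->u.fcp.cmnd = (unsigned long)scsi_cmnd;
    response->u.fcp.serial = scsi_cmnd->serial_number;

(all identifiers are quoted from the hunks in this patch).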
Signed-off-by: Martin Peschke Signed-off-by: Christof Schmitt Signed-off-by: James Bottomley --- drivers/s390/scsi/zfcp_dbf.c | 305 +++++++++++++++++++------------------------ drivers/s390/scsi/zfcp_dbf.h | 74 +++++------ 2 files changed, 170 insertions(+), 209 deletions(-) (limited to 'drivers/s390') diff --git a/drivers/s390/scsi/zfcp_dbf.c b/drivers/s390/scsi/zfcp_dbf.c index 0ab985c037fe..a39a3e33a5b6 100644 --- a/drivers/s390/scsi/zfcp_dbf.c +++ b/drivers/s390/scsi/zfcp_dbf.c @@ -144,12 +144,12 @@ void zfcp_hba_dbf_event_fsf_response(struct zfcp_fsf_req *fsf_req) struct zfcp_unit *unit; struct zfcp_send_els *send_els; struct zfcp_hba_dbf_record *rec = &adapter->hba_dbf_buf; - struct zfcp_hba_dbf_record_response *response = &rec->type.response; + struct zfcp_hba_dbf_record_response *response = &rec->u.response; int level; unsigned long flags; spin_lock_irqsave(&adapter->hba_dbf_lock, flags); - memset(rec, 0, sizeof(struct zfcp_hba_dbf_record)); + memset(rec, 0, sizeof(*rec)); strncpy(rec->tag, "resp", ZFCP_DBF_TAG_SIZE); if ((qtcb->prefix.prot_status != FSF_PROT_GOOD) && @@ -193,11 +193,9 @@ void zfcp_hba_dbf_event_fsf_response(struct zfcp_fsf_req *fsf_req) if (fsf_req->status & ZFCP_STATUS_FSFREQ_TASK_MANAGEMENT) break; scsi_cmnd = (struct scsi_cmnd *)fsf_req->data; - if (scsi_cmnd != NULL) { - response->data.send_fcp.scsi_cmnd - = (unsigned long)scsi_cmnd; - response->data.send_fcp.scsi_serial - = scsi_cmnd->serial_number; + if (scsi_cmnd) { + response->u.fcp.cmnd = (unsigned long)scsi_cmnd; + response->u.fcp.serial = scsi_cmnd->serial_number; } break; @@ -205,25 +203,25 @@ void zfcp_hba_dbf_event_fsf_response(struct zfcp_fsf_req *fsf_req) case FSF_QTCB_CLOSE_PORT: case FSF_QTCB_CLOSE_PHYSICAL_PORT: port = (struct zfcp_port *)fsf_req->data; - response->data.port.wwpn = port->wwpn; - response->data.port.d_id = port->d_id; - response->data.port.port_handle = qtcb->header.port_handle; + response->u.port.wwpn = port->wwpn; + response->u.port.d_id = port->d_id; + response->u.port.port_handle = qtcb->header.port_handle; break; case FSF_QTCB_OPEN_LUN: case FSF_QTCB_CLOSE_LUN: unit = (struct zfcp_unit *)fsf_req->data; port = unit->port; - response->data.unit.wwpn = port->wwpn; - response->data.unit.fcp_lun = unit->fcp_lun; - response->data.unit.port_handle = qtcb->header.port_handle; - response->data.unit.lun_handle = qtcb->header.lun_handle; + response->u.unit.wwpn = port->wwpn; + response->u.unit.fcp_lun = unit->fcp_lun; + response->u.unit.port_handle = qtcb->header.port_handle; + response->u.unit.lun_handle = qtcb->header.lun_handle; break; case FSF_QTCB_SEND_ELS: send_els = (struct zfcp_send_els *)fsf_req->data; - response->data.send_els.d_id = qtcb->bottom.support.d_id; - response->data.send_els.ls_code = send_els->ls_code >> 24; + response->u.els.d_id = qtcb->bottom.support.d_id; + response->u.els.ls_code = send_els->ls_code >> 24; break; case FSF_QTCB_ABORT_FCP_CMND: @@ -235,8 +233,7 @@ void zfcp_hba_dbf_event_fsf_response(struct zfcp_fsf_req *fsf_req) break; } - debug_event(adapter->hba_dbf, level, - rec, sizeof(struct zfcp_hba_dbf_record)); + debug_event(adapter->hba_dbf, level, rec, sizeof(*rec)); /* have fcp channel microcode fixed to use as little as possible */ if (fsf_req->fsf_command != FSF_QTCB_FCP_CMND) { @@ -259,26 +256,26 @@ zfcp_hba_dbf_event_fsf_unsol(const char *tag, struct zfcp_adapter *adapter, unsigned long flags; spin_lock_irqsave(&adapter->hba_dbf_lock, flags); - memset(rec, 0, sizeof(struct zfcp_hba_dbf_record)); + memset(rec, 0, sizeof(*rec)); strncpy(rec->tag, 
"stat", ZFCP_DBF_TAG_SIZE); strncpy(rec->tag2, tag, ZFCP_DBF_TAG_SIZE); - rec->type.status.failed = adapter->status_read_failed; + rec->u.status.failed = adapter->status_read_failed; if (status_buffer != NULL) { - rec->type.status.status_type = status_buffer->status_type; - rec->type.status.status_subtype = status_buffer->status_subtype; - memcpy(&rec->type.status.queue_designator, + rec->u.status.status_type = status_buffer->status_type; + rec->u.status.status_subtype = status_buffer->status_subtype; + memcpy(&rec->u.status.queue_designator, &status_buffer->queue_designator, sizeof(struct fsf_queue_designator)); switch (status_buffer->status_type) { case FSF_STATUS_READ_SENSE_DATA_AVAIL: - rec->type.status.payload_size = + rec->u.status.payload_size = ZFCP_DBF_UNSOL_PAYLOAD_SENSE_DATA_AVAIL; break; case FSF_STATUS_READ_BIT_ERROR_THRESHOLD: - rec->type.status.payload_size = + rec->u.status.payload_size = ZFCP_DBF_UNSOL_PAYLOAD_BIT_ERROR_THRESHOLD; break; @@ -286,22 +283,21 @@ zfcp_hba_dbf_event_fsf_unsol(const char *tag, struct zfcp_adapter *adapter, switch (status_buffer->status_subtype) { case FSF_STATUS_READ_SUB_NO_PHYSICAL_LINK: case FSF_STATUS_READ_SUB_FDISC_FAILED: - rec->type.status.payload_size = + rec->u.status.payload_size = sizeof(struct fsf_link_down_info); } break; case FSF_STATUS_READ_FEATURE_UPDATE_ALERT: - rec->type.status.payload_size = + rec->u.status.payload_size = ZFCP_DBF_UNSOL_PAYLOAD_FEATURE_UPDATE_ALERT; break; } - memcpy(&rec->type.status.payload, - &status_buffer->payload, rec->type.status.payload_size); + memcpy(&rec->u.status.payload, + &status_buffer->payload, rec->u.status.payload_size); } - debug_event(adapter->hba_dbf, 2, - rec, sizeof(struct zfcp_hba_dbf_record)); + debug_event(adapter->hba_dbf, 2, rec, sizeof(*rec)); spin_unlock_irqrestore(&adapter->hba_dbf_lock, flags); } @@ -310,19 +306,18 @@ zfcp_hba_dbf_event_qdio(struct zfcp_adapter *adapter, unsigned int status, unsigned int qdio_error, unsigned int siga_error, int sbal_index, int sbal_count) { - struct zfcp_hba_dbf_record *rec = &adapter->hba_dbf_buf; + struct zfcp_hba_dbf_record *r = &adapter->hba_dbf_buf; unsigned long flags; spin_lock_irqsave(&adapter->hba_dbf_lock, flags); - memset(rec, 0, sizeof(struct zfcp_hba_dbf_record)); - strncpy(rec->tag, "qdio", ZFCP_DBF_TAG_SIZE); - rec->type.qdio.status = status; - rec->type.qdio.qdio_error = qdio_error; - rec->type.qdio.siga_error = siga_error; - rec->type.qdio.sbal_index = sbal_index; - rec->type.qdio.sbal_count = sbal_count; - debug_event(adapter->hba_dbf, 0, - rec, sizeof(struct zfcp_hba_dbf_record)); + memset(r, 0, sizeof(*r)); + strncpy(r->tag, "qdio", ZFCP_DBF_TAG_SIZE); + r->u.qdio.status = status; + r->u.qdio.qdio_error = qdio_error; + r->u.qdio.siga_error = siga_error; + r->u.qdio.sbal_index = sbal_index; + r->u.qdio.sbal_count = sbal_count; + debug_event(adapter->hba_dbf, 0, r, sizeof(*r)); spin_unlock_irqrestore(&adapter->hba_dbf_lock, flags); } @@ -352,34 +347,29 @@ static void zfcp_hba_dbf_view_response(char **p, case FSF_QTCB_FCP_CMND: if (r->fsf_req_status & ZFCP_STATUS_FSFREQ_TASK_MANAGEMENT) break; - zfcp_dbf_out(p, "scsi_cmnd", "0x%0Lx", - r->data.send_fcp.scsi_cmnd); - zfcp_dbf_out(p, "scsi_serial", "0x%016Lx", - r->data.send_fcp.scsi_serial); + zfcp_dbf_out(p, "scsi_cmnd", "0x%0Lx", r->u.fcp.cmnd); + zfcp_dbf_out(p, "scsi_serial", "0x%016Lx", r->u.fcp.serial); break; case FSF_QTCB_OPEN_PORT_WITH_DID: case FSF_QTCB_CLOSE_PORT: case FSF_QTCB_CLOSE_PHYSICAL_PORT: - zfcp_dbf_out(p, "wwpn", "0x%016Lx", r->data.port.wwpn); - 
zfcp_dbf_out(p, "d_id", "0x%06x", r->data.port.d_id); - zfcp_dbf_out(p, "port_handle", "0x%08x", - r->data.port.port_handle); + zfcp_dbf_out(p, "wwpn", "0x%016Lx", r->u.port.wwpn); + zfcp_dbf_out(p, "d_id", "0x%06x", r->u.port.d_id); + zfcp_dbf_out(p, "port_handle", "0x%08x", r->u.port.port_handle); break; case FSF_QTCB_OPEN_LUN: case FSF_QTCB_CLOSE_LUN: - zfcp_dbf_out(p, "wwpn", "0x%016Lx", r->data.unit.wwpn); - zfcp_dbf_out(p, "fcp_lun", "0x%016Lx", r->data.unit.fcp_lun); - zfcp_dbf_out(p, "port_handle", "0x%08x", - r->data.unit.port_handle); - zfcp_dbf_out(p, "lun_handle", "0x%08x", - r->data.unit.lun_handle); + zfcp_dbf_out(p, "wwpn", "0x%016Lx", r->u.unit.wwpn); + zfcp_dbf_out(p, "fcp_lun", "0x%016Lx", r->u.unit.fcp_lun); + zfcp_dbf_out(p, "port_handle", "0x%08x", r->u.unit.port_handle); + zfcp_dbf_out(p, "lun_handle", "0x%08x", r->u.unit.lun_handle); break; case FSF_QTCB_SEND_ELS: - zfcp_dbf_out(p, "d_id", "0x%06x", r->data.send_els.d_id); - zfcp_dbf_out(p, "ls_code", "0x%02x", r->data.send_els.ls_code); + zfcp_dbf_out(p, "d_id", "0x%06x", r->u.els.d_id); + zfcp_dbf_out(p, "ls_code", "0x%02x", r->u.els.ls_code); break; case FSF_QTCB_ABORT_FCP_CMND: @@ -428,11 +418,11 @@ static int zfcp_hba_dbf_view_format(debug_info_t *id, struct debug_view *view, zfcp_dbf_tag(&p, "tag2", r->tag2); if (strncmp(r->tag, "resp", ZFCP_DBF_TAG_SIZE) == 0) - zfcp_hba_dbf_view_response(&p, &r->type.response); + zfcp_hba_dbf_view_response(&p, &r->u.response); else if (strncmp(r->tag, "stat", ZFCP_DBF_TAG_SIZE) == 0) - zfcp_hba_dbf_view_status(&p, &r->type.status); + zfcp_hba_dbf_view_status(&p, &r->u.status); else if (strncmp(r->tag, "qdio", ZFCP_DBF_TAG_SIZE) == 0) - zfcp_hba_dbf_view_qdio(&p, &r->type.qdio); + zfcp_hba_dbf_view_qdio(&p, &r->u.qdio); p += sprintf(p, "\n"); return p - out_buf; @@ -823,57 +813,34 @@ void zfcp_rec_dbf_event_action(u8 id2, struct zfcp_erp_action *erp_action) spin_unlock_irqrestore(&adapter->rec_dbf_lock, flags); } -static void -_zfcp_san_dbf_event_common_ct(const char *tag, struct zfcp_fsf_req *fsf_req, - u32 s_id, u32 d_id, void *buffer, int buflen) -{ - struct zfcp_send_ct *send_ct = (struct zfcp_send_ct *)fsf_req->data; - struct zfcp_port *port = send_ct->port; - struct zfcp_adapter *adapter = port->adapter; - struct ct_hdr *header = (struct ct_hdr *)buffer; - struct zfcp_san_dbf_record *rec = &adapter->san_dbf_buf; - struct zfcp_san_dbf_record_ct *ct = &rec->type.ct; - unsigned long flags; - - spin_lock_irqsave(&adapter->san_dbf_lock, flags); - memset(rec, 0, sizeof(struct zfcp_san_dbf_record)); - strncpy(rec->tag, tag, ZFCP_DBF_TAG_SIZE); - rec->fsf_reqid = (unsigned long)fsf_req; - rec->fsf_seqno = fsf_req->seq_no; - rec->s_id = s_id; - rec->d_id = d_id; - if (strncmp(tag, "octc", ZFCP_DBF_TAG_SIZE) == 0) { - ct->type.request.cmd_req_code = header->cmd_rsp_code; - ct->type.request.revision = header->revision; - ct->type.request.gs_type = header->gs_type; - ct->type.request.gs_subtype = header->gs_subtype; - ct->type.request.options = header->options; - ct->type.request.max_res_size = header->max_res_size; - } else if (strncmp(tag, "rctc", ZFCP_DBF_TAG_SIZE) == 0) { - ct->type.response.cmd_rsp_code = header->cmd_rsp_code; - ct->type.response.revision = header->revision; - ct->type.response.reason_code = header->reason_code; - ct->type.response.reason_code_expl = header->reason_code_expl; - ct->type.response.vendor_unique = header->vendor_unique; - } - ct->payload_size = - min(buflen - (int)sizeof(struct ct_hdr), ZFCP_DBF_CT_PAYLOAD); - memcpy(ct->payload, buffer + 
sizeof(struct ct_hdr), ct->payload_size); - debug_event(adapter->san_dbf, 3, - rec, sizeof(struct zfcp_san_dbf_record)); - spin_unlock_irqrestore(&adapter->san_dbf_lock, flags); -} - void zfcp_san_dbf_event_ct_request(struct zfcp_fsf_req *fsf_req) { struct zfcp_send_ct *ct = (struct zfcp_send_ct *)fsf_req->data; struct zfcp_port *port = ct->port; struct zfcp_adapter *adapter = port->adapter; + struct ct_hdr *hdr = zfcp_sg_to_address(ct->req); + struct zfcp_san_dbf_record *r = &adapter->san_dbf_buf; + struct zfcp_san_dbf_record_ct_request *oct = &r->u.ct_req; + unsigned long flags; - _zfcp_san_dbf_event_common_ct("octc", fsf_req, - fc_host_port_id(adapter->scsi_host), - port->d_id, zfcp_sg_to_address(ct->req), - ct->req->length); + spin_lock_irqsave(&adapter->san_dbf_lock, flags); + memset(r, 0, sizeof(*r)); + strncpy(r->tag, "octc", ZFCP_DBF_TAG_SIZE); + r->fsf_reqid = (unsigned long)fsf_req; + r->fsf_seqno = fsf_req->seq_no; + r->s_id = fc_host_port_id(adapter->scsi_host); + r->d_id = port->d_id; + oct->cmd_req_code = hdr->cmd_rsp_code; + oct->revision = hdr->revision; + oct->gs_type = hdr->gs_type; + oct->gs_subtype = hdr->gs_subtype; + oct->options = hdr->options; + oct->max_res_size = hdr->max_res_size; + oct->len = min((int)ct->req->length - (int)sizeof(struct ct_hdr), + ZFCP_DBF_CT_PAYLOAD); + memcpy(oct->payload, (void *)hdr + sizeof(struct ct_hdr), oct->len); + debug_event(adapter->san_dbf, 3, r, sizeof(*r)); + spin_unlock_irqrestore(&adapter->san_dbf_lock, flags); } void zfcp_san_dbf_event_ct_response(struct zfcp_fsf_req *fsf_req) @@ -881,11 +848,28 @@ void zfcp_san_dbf_event_ct_response(struct zfcp_fsf_req *fsf_req) struct zfcp_send_ct *ct = (struct zfcp_send_ct *)fsf_req->data; struct zfcp_port *port = ct->port; struct zfcp_adapter *adapter = port->adapter; + struct ct_hdr *hdr = zfcp_sg_to_address(ct->resp); + struct zfcp_san_dbf_record *r = &adapter->san_dbf_buf; + struct zfcp_san_dbf_record_ct_response *rct = &r->u.ct_resp; + unsigned long flags; - _zfcp_san_dbf_event_common_ct("rctc", fsf_req, port->d_id, - fc_host_port_id(adapter->scsi_host), - zfcp_sg_to_address(ct->resp), - ct->resp->length); + spin_lock_irqsave(&adapter->san_dbf_lock, flags); + memset(r, 0, sizeof(*r)); + strncpy(r->tag, "rctc", ZFCP_DBF_TAG_SIZE); + r->fsf_reqid = (unsigned long)fsf_req; + r->fsf_seqno = fsf_req->seq_no; + r->s_id = port->d_id; + r->d_id = fc_host_port_id(adapter->scsi_host); + rct->cmd_rsp_code = hdr->cmd_rsp_code; + rct->revision = hdr->revision; + rct->reason_code = hdr->reason_code; + rct->expl = hdr->reason_code_expl; + rct->vendor_unique = hdr->vendor_unique; + rct->len = min((int)ct->resp->length - (int)sizeof(struct ct_hdr), + ZFCP_DBF_CT_PAYLOAD); + memcpy(rct->payload, (void *)hdr + sizeof(struct ct_hdr), rct->len); + debug_event(adapter->san_dbf, 3, r, sizeof(*r)); + spin_unlock_irqrestore(&adapter->san_dbf_lock, flags); } static void @@ -898,13 +882,13 @@ _zfcp_san_dbf_event_common_els(const char *tag, int level, unsigned long flags; spin_lock_irqsave(&adapter->san_dbf_lock, flags); - memset(rec, 0, sizeof(struct zfcp_san_dbf_record)); + memset(rec, 0, sizeof(*rec)); strncpy(rec->tag, tag, ZFCP_DBF_TAG_SIZE); rec->fsf_reqid = (unsigned long)fsf_req; rec->fsf_seqno = fsf_req->seq_no; rec->s_id = s_id; rec->d_id = d_id; - rec->type.els.ls_code = ls_code; + rec->u.els.ls_code = ls_code; debug_event(adapter->san_dbf, level, rec, sizeof(*rec)); zfcp_dbf_hexdump(adapter->san_dbf, rec, sizeof(*rec), level, buffer, min(buflen, ZFCP_DBF_ELS_MAX_PAYLOAD)); @@ -967,42 +951,33 @@ 
zfcp_san_dbf_view_format(debug_info_t * id, struct debug_view *view, zfcp_dbf_out(&p, "d_id", "0x%06x", r->d_id); if (strncmp(r->tag, "octc", ZFCP_DBF_TAG_SIZE) == 0) { - /* FIXME: struct zfcp_dbf_ct_req *ct = ...; */ - zfcp_dbf_out(&p, "cmd_req_code", "0x%04x", - r->type.ct.type.request.cmd_req_code); - zfcp_dbf_out(&p, "revision", "0x%02x", - r->type.ct.type.request.revision); - zfcp_dbf_out(&p, "gs_type", "0x%02x", - r->type.ct.type.request.gs_type); - zfcp_dbf_out(&p, "gs_subtype", "0x%02x", - r->type.ct.type.request.gs_subtype); - zfcp_dbf_out(&p, "options", "0x%02x", - r->type.ct.type.request.options); - zfcp_dbf_out(&p, "max_res_size", "0x%04x", - r->type.ct.type.request.max_res_size); - total = r->type.ct.payload_size; - buffer = r->type.ct.payload; + struct zfcp_san_dbf_record_ct_request *ct = &r->u.ct_req; + zfcp_dbf_out(&p, "cmd_req_code", "0x%04x", ct->cmd_req_code); + zfcp_dbf_out(&p, "revision", "0x%02x", ct->revision); + zfcp_dbf_out(&p, "gs_type", "0x%02x", ct->gs_type); + zfcp_dbf_out(&p, "gs_subtype", "0x%02x", ct->gs_subtype); + zfcp_dbf_out(&p, "options", "0x%02x", ct->options); + zfcp_dbf_out(&p, "max_res_size", "0x%04x", ct->max_res_size); + total = ct->len; + buffer = ct->payload; buflen = min(total, ZFCP_DBF_CT_PAYLOAD); } else if (strncmp(r->tag, "rctc", ZFCP_DBF_TAG_SIZE) == 0) { - zfcp_dbf_out(&p, "cmd_rsp_code", "0x%04x", - r->type.ct.type.response.cmd_rsp_code); - zfcp_dbf_out(&p, "revision", "0x%02x", - r->type.ct.type.response.revision); - zfcp_dbf_out(&p, "reason_code", "0x%02x", - r->type.ct.type.response.reason_code); - zfcp_dbf_out(&p, "reason_code_expl", "0x%02x", - r->type.ct.type.response.reason_code_expl); - zfcp_dbf_out(&p, "vendor_unique", "0x%02x", - r->type.ct.type.response.vendor_unique); - total = r->type.ct.payload_size; - buffer = r->type.ct.payload; + struct zfcp_san_dbf_record_ct_response *ct = &r->u.ct_resp; + zfcp_dbf_out(&p, "cmd_rsp_code", "0x%04x", ct->cmd_rsp_code); + zfcp_dbf_out(&p, "revision", "0x%02x", ct->revision); + zfcp_dbf_out(&p, "reason_code", "0x%02x", ct->reason_code); + zfcp_dbf_out(&p, "reason_code_expl", "0x%02x", ct->expl); + zfcp_dbf_out(&p, "vendor_unique", "0x%02x", ct->vendor_unique); + total = ct->len; + buffer = ct->payload; buflen = min(total, ZFCP_DBF_CT_PAYLOAD); } else if (strncmp(r->tag, "oels", ZFCP_DBF_TAG_SIZE) == 0 || strncmp(r->tag, "rels", ZFCP_DBF_TAG_SIZE) == 0 || strncmp(r->tag, "iels", ZFCP_DBF_TAG_SIZE) == 0) { - zfcp_dbf_out(&p, "ls_code", "0x%02x", r->type.els.ls_code); - total = r->type.els.payload_size; - buffer = r->type.els.payload; + struct zfcp_san_dbf_record_els *els = &r->u.els; + zfcp_dbf_out(&p, "ls_code", "0x%02x", els->ls_code); + total = els->len; + buffer = els->payload; buflen = min(total, ZFCP_DBF_ELS_PAYLOAD); } @@ -1038,7 +1013,7 @@ _zfcp_scsi_dbf_event_common(const char *tag, const char *tag2, int level, spin_lock_irqsave(&adapter->scsi_dbf_lock, flags); do { - memset(rec, 0, sizeof(struct zfcp_scsi_dbf_record)); + memset(rec, 0, sizeof(*rec)); if (offset == 0) { strncpy(rec->tag, tag, ZFCP_DBF_TAG_SIZE); strncpy(rec->tag2, tag2, ZFCP_DBF_TAG_SIZE); @@ -1064,20 +1039,16 @@ _zfcp_scsi_dbf_event_common(const char *tag, const char *tag2, int level, fcp_sns_info = zfcp_get_fcp_sns_info_ptr(fcp_rsp); - rec->type.fcp.rsp_validity = - fcp_rsp->validity.value; - rec->type.fcp.rsp_scsi_status = - fcp_rsp->scsi_status; - rec->type.fcp.rsp_resid = fcp_rsp->fcp_resid; + rec->rsp_validity = fcp_rsp->validity.value; + rec->rsp_scsi_status = fcp_rsp->scsi_status; + rec->rsp_resid = 
fcp_rsp->fcp_resid; if (fcp_rsp->validity.bits.fcp_rsp_len_valid) - rec->type.fcp.rsp_code = - *(fcp_rsp_info + 3); + rec->rsp_code = *(fcp_rsp_info + 3); if (fcp_rsp->validity.bits.fcp_sns_len_valid) { buflen = min((int)fcp_rsp->fcp_sns_len, ZFCP_DBF_SCSI_MAX_FCP_SNS_INFO); - rec->type.fcp.sns_info_len = buflen; - memcpy(rec->type.fcp.sns_info, - fcp_sns_info, + rec->sns_info_len = buflen; + memcpy(rec->sns_info, fcp_sns_info, min(buflen, ZFCP_DBF_SCSI_FCP_SNS_INFO)); offset += min(buflen, @@ -1088,7 +1059,7 @@ _zfcp_scsi_dbf_event_common(const char *tag, const char *tag2, int level, rec->fsf_seqno = fsf_req->seq_no; rec->fsf_issued = fsf_req->issued; } - rec->type.old_fsf_reqid = old_req_id; + rec->old_fsf_reqid = old_req_id; } else { strncpy(dump->tag, "dump", ZFCP_DBF_TAG_SIZE); dump->total_size = buflen; @@ -1100,8 +1071,7 @@ _zfcp_scsi_dbf_event_common(const char *tag, const char *tag2, int level, memcpy(dump->data, fcp_sns_info + offset, dump->size); offset += dump->size; } - debug_event(adapter->scsi_dbf, level, - rec, sizeof(struct zfcp_scsi_dbf_record)); + debug_event(adapter->scsi_dbf, level, rec, sizeof(*rec)); } while (offset < buflen); spin_unlock_irqrestore(&adapter->scsi_dbf_lock, flags); } @@ -1159,28 +1129,23 @@ zfcp_scsi_dbf_view_format(debug_info_t * id, struct debug_view *view, zfcp_dbf_out(&p, "scsi_retries", "0x%02x", r->scsi_retries); zfcp_dbf_out(&p, "scsi_allowed", "0x%02x", r->scsi_allowed); if (strncmp(r->tag, "abrt", ZFCP_DBF_TAG_SIZE) == 0) - zfcp_dbf_out(&p, "old_fsf_reqid", "0x%0Lx", - r->type.old_fsf_reqid); + zfcp_dbf_out(&p, "old_fsf_reqid", "0x%0Lx", r->old_fsf_reqid); zfcp_dbf_out(&p, "fsf_reqid", "0x%0Lx", r->fsf_reqid); zfcp_dbf_out(&p, "fsf_seqno", "0x%08x", r->fsf_seqno); zfcp_dbf_timestamp(r->fsf_issued, &t); zfcp_dbf_out(&p, "fsf_issued", "%011lu:%06lu", t.tv_sec, t.tv_nsec); if (strncmp(r->tag, "rslt", ZFCP_DBF_TAG_SIZE) == 0) { - zfcp_dbf_out(&p, "fcp_rsp_validity", "0x%02x", - r->type.fcp.rsp_validity); - zfcp_dbf_out(&p, "fcp_rsp_scsi_status", - "0x%02x", r->type.fcp.rsp_scsi_status); - zfcp_dbf_out(&p, "fcp_rsp_resid", "0x%08x", - r->type.fcp.rsp_resid); - zfcp_dbf_out(&p, "fcp_rsp_code", "0x%08x", - r->type.fcp.rsp_code); - zfcp_dbf_out(&p, "fcp_sns_info_len", "0x%08x", - r->type.fcp.sns_info_len); - zfcp_dbf_outd(&p, "fcp_sns_info", r->type.fcp.sns_info, - min((int)r->type.fcp.sns_info_len, + zfcp_dbf_out(&p, "fcp_rsp_validity", "0x%02x", r->rsp_validity); + zfcp_dbf_out(&p, "fcp_rsp_scsi_status", "0x%02x", + r->rsp_scsi_status); + zfcp_dbf_out(&p, "fcp_rsp_resid", "0x%08x", r->rsp_resid); + zfcp_dbf_out(&p, "fcp_rsp_code", "0x%08x", r->rsp_code); + zfcp_dbf_out(&p, "fcp_sns_info_len", "0x%08x", r->sns_info_len); + zfcp_dbf_outd(&p, "fcp_sns_info", r->sns_info, + min((int)r->sns_info_len, ZFCP_DBF_SCSI_FCP_SNS_INFO), 0, - r->type.fcp.sns_info_len); + r->sns_info_len); } p += sprintf(p, "\n"); return p - out_buf; diff --git a/drivers/s390/scsi/zfcp_dbf.h b/drivers/s390/scsi/zfcp_dbf.h index 5d88c01d5981..732a5ba1bea9 100644 --- a/drivers/s390/scsi/zfcp_dbf.h +++ b/drivers/s390/scsi/zfcp_dbf.h @@ -104,9 +104,9 @@ struct zfcp_hba_dbf_record_response { u64 erp_action; union { struct { - u64 scsi_cmnd; - u64 scsi_serial; - } send_fcp; + u64 cmnd; + u64 serial; + } fcp; struct { u64 wwpn; u32 d_id; @@ -121,8 +121,8 @@ struct zfcp_hba_dbf_record_response { struct { u32 d_id; u8 ls_code; - } send_els; - } data; + } els; + } u; } __attribute__ ((packed)); struct zfcp_hba_dbf_record_status { @@ -154,35 +154,34 @@ struct zfcp_hba_dbf_record { struct 
zfcp_hba_dbf_record_response response; struct zfcp_hba_dbf_record_status status; struct zfcp_hba_dbf_record_qdio qdio; - } type; + } u; } __attribute__ ((packed)); -struct zfcp_san_dbf_record_ct { - union { - struct { - u16 cmd_req_code; - u8 revision; - u8 gs_type; - u8 gs_subtype; - u8 options; - u16 max_res_size; - } request; - struct { - u16 cmd_rsp_code; - u8 revision; - u8 reason_code; - u8 reason_code_expl; - u8 vendor_unique; - } response; - } type; - u32 payload_size; +struct zfcp_san_dbf_record_ct_request { + u16 cmd_req_code; + u8 revision; + u8 gs_type; + u8 gs_subtype; + u8 options; + u16 max_res_size; + u32 len; #define ZFCP_DBF_CT_PAYLOAD 24 u8 payload[ZFCP_DBF_CT_PAYLOAD]; } __attribute__ ((packed)); +struct zfcp_san_dbf_record_ct_response { + u16 cmd_rsp_code; + u8 revision; + u8 reason_code; + u8 expl; + u8 vendor_unique; + u32 len; + u8 payload[ZFCP_DBF_CT_PAYLOAD]; +} __attribute__ ((packed)); + struct zfcp_san_dbf_record_els { u8 ls_code; - u32 payload_size; + u32 len; #define ZFCP_DBF_ELS_PAYLOAD 32 #define ZFCP_DBF_ELS_MAX_PAYLOAD 1024 u8 payload[ZFCP_DBF_ELS_PAYLOAD]; @@ -195,9 +194,10 @@ struct zfcp_san_dbf_record { u32 s_id; u32 d_id; union { - struct zfcp_san_dbf_record_ct ct; + struct zfcp_san_dbf_record_ct_request ct_req; + struct zfcp_san_dbf_record_ct_response ct_resp; struct zfcp_san_dbf_record_els els; - } type; + } u; } __attribute__ ((packed)); struct zfcp_scsi_dbf_record { @@ -215,19 +215,15 @@ struct zfcp_scsi_dbf_record { u64 fsf_reqid; u32 fsf_seqno; u64 fsf_issued; - union { - u64 old_fsf_reqid; - struct { - u8 rsp_validity; - u8 rsp_scsi_status; - u32 rsp_resid; - u8 rsp_code; + u64 old_fsf_reqid; + u8 rsp_validity; + u8 rsp_scsi_status; + u32 rsp_resid; + u8 rsp_code; #define ZFCP_DBF_SCSI_FCP_SNS_INFO 16 #define ZFCP_DBF_SCSI_MAX_FCP_SNS_INFO 256 - u32 sns_info_len; - u8 sns_info[ZFCP_DBF_SCSI_FCP_SNS_INFO]; - } fcp; - } type; + u32 sns_info_len; + u8 sns_info[ZFCP_DBF_SCSI_FCP_SNS_INFO]; } __attribute__ ((packed)); #endif /* ZFCP_DBF_H */ -- cgit v1.2.3-59-g8ed1b From 92c7a83fc1fe7b9c3b26831cf84aedd3962d13ee Mon Sep 17 00:00:00 2001 From: Martin Peschke Date: Mon, 31 Mar 2008 11:15:30 +0200 Subject: [SCSI] zfcp: Cleanup line breaks in debug trace. Remove line breaks that do not conform to coding style. 
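As an example of the adjusted style (quoted from the first zfcp_dbf.c hunk below), the function head

    static int
    zfcp_dbf_view_header(debug_info_t * id, struct debug_view *view, int area,
                         debug_entry_t * entry, char *out_buf)

becomes

    static int zfcp_dbf_view_header(debug_info_t *id, struct debug_view *view,
                                    int area, debug_entry_t *entry, char *out_buf)

i.e. the return type stays on the same line as the function name, and the '*' is written next to the parameter name.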
Signed-off-by: Martin Peschke Signed-off-by: Christof Schmitt Signed-off-by: James Bottomley --- drivers/s390/scsi/zfcp_dbf.c | 123 +++++++++++++++++++------------------------ 1 file changed, 55 insertions(+), 68 deletions(-) (limited to 'drivers/s390') diff --git a/drivers/s390/scsi/zfcp_dbf.c b/drivers/s390/scsi/zfcp_dbf.c index a39a3e33a5b6..7e85e87b0ede 100644 --- a/drivers/s390/scsi/zfcp_dbf.c +++ b/drivers/s390/scsi/zfcp_dbf.c @@ -110,9 +110,8 @@ static void zfcp_dbf_outd(char **p, const char *label, char *buffer, *p += sprintf(*p, "\n"); } -static int -zfcp_dbf_view_header(debug_info_t * id, struct debug_view *view, int area, - debug_entry_t * entry, char *out_buf) +static int zfcp_dbf_view_header(debug_info_t *id, struct debug_view *view, + int area, debug_entry_t *entry, char *out_buf) { struct zfcp_dbf_dump *dump = (struct zfcp_dbf_dump *)DEBUG_DATA(entry); struct timespec t; @@ -137,7 +136,7 @@ void zfcp_hba_dbf_event_fsf_response(struct zfcp_fsf_req *fsf_req) struct zfcp_adapter *adapter = fsf_req->adapter; struct fsf_qtcb *qtcb = fsf_req->qtcb; union fsf_prot_status_qual *prot_status_qual = - &qtcb->prefix.prot_status_qual; + &qtcb->prefix.prot_status_qual; union fsf_status_qual *fsf_status_qual = &qtcb->header.fsf_status_qual; struct scsi_cmnd *scsi_cmnd; struct zfcp_port *port; @@ -248,9 +247,8 @@ void zfcp_hba_dbf_event_fsf_response(struct zfcp_fsf_req *fsf_req) spin_unlock_irqrestore(&adapter->hba_dbf_lock, flags); } -void -zfcp_hba_dbf_event_fsf_unsol(const char *tag, struct zfcp_adapter *adapter, - struct fsf_status_read_buffer *status_buffer) +void zfcp_hba_dbf_event_fsf_unsol(const char *tag, struct zfcp_adapter *adapter, + struct fsf_status_read_buffer *status_buffer) { struct zfcp_hba_dbf_record *rec = &adapter->hba_dbf_buf; unsigned long flags; @@ -301,10 +299,9 @@ zfcp_hba_dbf_event_fsf_unsol(const char *tag, struct zfcp_adapter *adapter, spin_unlock_irqrestore(&adapter->hba_dbf_lock, flags); } -void -zfcp_hba_dbf_event_qdio(struct zfcp_adapter *adapter, unsigned int status, - unsigned int qdio_error, unsigned int siga_error, - int sbal_index, int sbal_count) +void zfcp_hba_dbf_event_qdio(struct zfcp_adapter *adapter, unsigned int status, + unsigned int qdio_error, unsigned int siga_error, + int sbal_index, int sbal_count) { struct zfcp_hba_dbf_record *r = &adapter->hba_dbf_buf; unsigned long flags; @@ -872,10 +869,10 @@ void zfcp_san_dbf_event_ct_response(struct zfcp_fsf_req *fsf_req) spin_unlock_irqrestore(&adapter->san_dbf_lock, flags); } -static void -_zfcp_san_dbf_event_common_els(const char *tag, int level, - struct zfcp_fsf_req *fsf_req, u32 s_id, - u32 d_id, u8 ls_code, void *buffer, int buflen) +static void zfcp_san_dbf_event_els(const char *tag, int level, + struct zfcp_fsf_req *fsf_req, u32 s_id, + u32 d_id, u8 ls_code, void *buffer, + int buflen) { struct zfcp_adapter *adapter = fsf_req->adapter; struct zfcp_san_dbf_record *rec = &adapter->san_dbf_buf; @@ -899,42 +896,39 @@ void zfcp_san_dbf_event_els_request(struct zfcp_fsf_req *fsf_req) { struct zfcp_send_els *els = (struct zfcp_send_els *)fsf_req->data; - _zfcp_san_dbf_event_common_els("oels", 2, fsf_req, - fc_host_port_id(els->adapter->scsi_host), - els->d_id, - *(u8 *) zfcp_sg_to_address(els->req), - zfcp_sg_to_address(els->req), - els->req->length); + zfcp_san_dbf_event_els("oels", 2, fsf_req, + fc_host_port_id(els->adapter->scsi_host), + els->d_id, *(u8 *) zfcp_sg_to_address(els->req), + zfcp_sg_to_address(els->req), els->req->length); } void zfcp_san_dbf_event_els_response(struct zfcp_fsf_req 
*fsf_req) { struct zfcp_send_els *els = (struct zfcp_send_els *)fsf_req->data; - _zfcp_san_dbf_event_common_els("rels", 2, fsf_req, els->d_id, - fc_host_port_id(els->adapter->scsi_host), - *(u8 *) zfcp_sg_to_address(els->req), - zfcp_sg_to_address(els->resp), - els->resp->length); + zfcp_san_dbf_event_els("rels", 2, fsf_req, els->d_id, + fc_host_port_id(els->adapter->scsi_host), + *(u8 *)zfcp_sg_to_address(els->req), + zfcp_sg_to_address(els->resp), + els->resp->length); } void zfcp_san_dbf_event_incoming_els(struct zfcp_fsf_req *fsf_req) { struct zfcp_adapter *adapter = fsf_req->adapter; - struct fsf_status_read_buffer *status_buffer = - (struct fsf_status_read_buffer *)fsf_req->data; - int length = (int)status_buffer->length - - (int)((void *)&status_buffer->payload - (void *)status_buffer); - - _zfcp_san_dbf_event_common_els("iels", 1, fsf_req, status_buffer->d_id, - fc_host_port_id(adapter->scsi_host), - *(u8 *) status_buffer->payload, - (void *)status_buffer->payload, length); + struct fsf_status_read_buffer *buf = + (struct fsf_status_read_buffer *)fsf_req->data; + int length = (int)buf->length - + (int)((void *)&buf->payload - (void *)buf); + + zfcp_san_dbf_event_els("iels", 1, fsf_req, buf->d_id, + fc_host_port_id(adapter->scsi_host), + *(u8 *)buf->payload, (void *)buf->payload, + length); } -static int -zfcp_san_dbf_view_format(debug_info_t * id, struct debug_view *view, - char *out_buf, const char *in_buf) +static int zfcp_san_dbf_view_format(debug_info_t *id, struct debug_view *view, + char *out_buf, const char *in_buf) { struct zfcp_san_dbf_record *r = (struct zfcp_san_dbf_record *)in_buf; char *buffer = NULL; @@ -997,12 +991,11 @@ static struct debug_view zfcp_san_dbf_view = { NULL }; -static void -_zfcp_scsi_dbf_event_common(const char *tag, const char *tag2, int level, - struct zfcp_adapter *adapter, - struct scsi_cmnd *scsi_cmnd, - struct zfcp_fsf_req *fsf_req, - unsigned long old_req_id) +static void zfcp_scsi_dbf_event(const char *tag, const char *tag2, int level, + struct zfcp_adapter *adapter, + struct scsi_cmnd *scsi_cmnd, + struct zfcp_fsf_req *fsf_req, + unsigned long old_req_id) { struct zfcp_scsi_dbf_record *rec = &adapter->scsi_dbf_buf; struct zfcp_dbf_dump *dump = (struct zfcp_dbf_dump *)rec; @@ -1076,39 +1069,33 @@ _zfcp_scsi_dbf_event_common(const char *tag, const char *tag2, int level, spin_unlock_irqrestore(&adapter->scsi_dbf_lock, flags); } -void -zfcp_scsi_dbf_event_result(const char *tag, int level, - struct zfcp_adapter *adapter, - struct scsi_cmnd *scsi_cmnd, - struct zfcp_fsf_req *fsf_req) +void zfcp_scsi_dbf_event_result(const char *tag, int level, + struct zfcp_adapter *adapter, + struct scsi_cmnd *scsi_cmnd, + struct zfcp_fsf_req *fsf_req) { - _zfcp_scsi_dbf_event_common("rslt", tag, level, - adapter, scsi_cmnd, fsf_req, 0); + zfcp_scsi_dbf_event("rslt", tag, level, adapter, scsi_cmnd, fsf_req, 0); } -void -zfcp_scsi_dbf_event_abort(const char *tag, struct zfcp_adapter *adapter, - struct scsi_cmnd *scsi_cmnd, - struct zfcp_fsf_req *new_fsf_req, - unsigned long old_req_id) +void zfcp_scsi_dbf_event_abort(const char *tag, struct zfcp_adapter *adapter, + struct scsi_cmnd *scsi_cmnd, + struct zfcp_fsf_req *new_fsf_req, + unsigned long old_req_id) { - _zfcp_scsi_dbf_event_common("abrt", tag, 1, - adapter, scsi_cmnd, new_fsf_req, old_req_id); + zfcp_scsi_dbf_event("abrt", tag, 1, adapter, scsi_cmnd, new_fsf_req, + old_req_id); } -void -zfcp_scsi_dbf_event_devreset(const char *tag, u8 flag, struct zfcp_unit *unit, - struct scsi_cmnd *scsi_cmnd) +void 
zfcp_scsi_dbf_event_devreset(const char *tag, u8 flag, + struct zfcp_unit *unit, + struct scsi_cmnd *scsi_cmnd) { - struct zfcp_adapter *adapter = unit->port->adapter; - - _zfcp_scsi_dbf_event_common(flag == FCP_TARGET_RESET ? "trst" : "lrst", - tag, 1, adapter, scsi_cmnd, NULL, 0); + zfcp_scsi_dbf_event(flag == FCP_TARGET_RESET ? "trst" : "lrst", tag, 1, + unit->port->adapter, scsi_cmnd, NULL, 0); } -static int -zfcp_scsi_dbf_view_format(debug_info_t * id, struct debug_view *view, - char *out_buf, const char *in_buf) +static int zfcp_scsi_dbf_view_format(debug_info_t *id, struct debug_view *view, + char *out_buf, const char *in_buf) { struct zfcp_scsi_dbf_record *r = (struct zfcp_scsi_dbf_record *)in_buf; struct timespec t; -- cgit v1.2.3-59-g8ed1b From bfab1637b5d0c9683016917fa8e082ba6ce8d5a6 Mon Sep 17 00:00:00 2001 From: Martin Peschke Date: Mon, 31 Mar 2008 11:15:31 +0200 Subject: [SCSI] zfcp: Add docbook comments to debug trace. Add missing docbook-comments for functions forming zfcp's internal trace API. Signed-off-by: Martin Peschke Signed-off-by: Christof Schmitt Signed-off-by: James Bottomley --- drivers/s390/scsi/zfcp_dbf.c | 62 ++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 62 insertions(+) (limited to 'drivers/s390') diff --git a/drivers/s390/scsi/zfcp_dbf.c b/drivers/s390/scsi/zfcp_dbf.c index 7e85e87b0ede..85ba4cc4190e 100644 --- a/drivers/s390/scsi/zfcp_dbf.c +++ b/drivers/s390/scsi/zfcp_dbf.c @@ -131,6 +131,10 @@ static int zfcp_dbf_view_header(debug_info_t *id, struct debug_view *view, return p - out_buf; } +/** + * zfcp_hba_dbf_event_fsf_response - trace event for request completion + * @fsf_req: request that has been completed + */ void zfcp_hba_dbf_event_fsf_response(struct zfcp_fsf_req *fsf_req) { struct zfcp_adapter *adapter = fsf_req->adapter; @@ -247,6 +251,12 @@ void zfcp_hba_dbf_event_fsf_response(struct zfcp_fsf_req *fsf_req) spin_unlock_irqrestore(&adapter->hba_dbf_lock, flags); } +/** + * zfcp_hba_dbf_event_fsf_unsol - trace event for an unsolicited status buffer + * @tag: tag indicating which kind of unsolicited status has been received + * @adapter: adapter that has issued the unsolicited status buffer + * @status_buffer: buffer containing payload of unsolicited status + */ void zfcp_hba_dbf_event_fsf_unsol(const char *tag, struct zfcp_adapter *adapter, struct fsf_status_read_buffer *status_buffer) { @@ -299,6 +309,15 @@ void zfcp_hba_dbf_event_fsf_unsol(const char *tag, struct zfcp_adapter *adapter, spin_unlock_irqrestore(&adapter->hba_dbf_lock, flags); } +/** + * zfcp_hba_dbf_event_qdio - trace event for QDIO related failure + * @adapter: adapter affected by this QDIO related event + * @status: as passed by qdio module + * @qdio_error: as passed by qdio module + * @siga_error: as passed by qdio module + * @sbal_index: first buffer with error condition, as passed by qdio module + * @sbal_count: number of buffers affected, as passed by qdio module + */ void zfcp_hba_dbf_event_qdio(struct zfcp_adapter *adapter, unsigned int status, unsigned int qdio_error, unsigned int siga_error, int sbal_index, int sbal_count) @@ -810,6 +829,10 @@ void zfcp_rec_dbf_event_action(u8 id2, struct zfcp_erp_action *erp_action) spin_unlock_irqrestore(&adapter->rec_dbf_lock, flags); } +/** + * zfcp_san_dbf_event_ct_request - trace event for issued CT request + * @fsf_req: request containing issued CT data + */ void zfcp_san_dbf_event_ct_request(struct zfcp_fsf_req *fsf_req) { struct zfcp_send_ct *ct = (struct zfcp_send_ct *)fsf_req->data; @@ -840,6 +863,10 @@ void 
zfcp_san_dbf_event_ct_request(struct zfcp_fsf_req *fsf_req) spin_unlock_irqrestore(&adapter->san_dbf_lock, flags); } +/** + * zfcp_san_dbf_event_ct_response - trace event for completion of CT request + * @fsf_req: request containing CT response + */ void zfcp_san_dbf_event_ct_response(struct zfcp_fsf_req *fsf_req) { struct zfcp_send_ct *ct = (struct zfcp_send_ct *)fsf_req->data; @@ -892,6 +919,10 @@ static void zfcp_san_dbf_event_els(const char *tag, int level, spin_unlock_irqrestore(&adapter->san_dbf_lock, flags); } +/** + * zfcp_san_dbf_event_els_request - trace event for issued ELS + * @fsf_req: request containing issued ELS + */ void zfcp_san_dbf_event_els_request(struct zfcp_fsf_req *fsf_req) { struct zfcp_send_els *els = (struct zfcp_send_els *)fsf_req->data; @@ -902,6 +933,10 @@ void zfcp_san_dbf_event_els_request(struct zfcp_fsf_req *fsf_req) zfcp_sg_to_address(els->req), els->req->length); } +/** + * zfcp_san_dbf_event_els_response - trace event for completed ELS + * @fsf_req: request containing ELS response + */ void zfcp_san_dbf_event_els_response(struct zfcp_fsf_req *fsf_req) { struct zfcp_send_els *els = (struct zfcp_send_els *)fsf_req->data; @@ -913,6 +948,10 @@ void zfcp_san_dbf_event_els_response(struct zfcp_fsf_req *fsf_req) els->resp->length); } +/** + * zfcp_san_dbf_event_incoming_els - trace event for incomig ELS + * @fsf_req: request containing unsolicited status buffer with incoming ELS + */ void zfcp_san_dbf_event_incoming_els(struct zfcp_fsf_req *fsf_req) { struct zfcp_adapter *adapter = fsf_req->adapter; @@ -1069,6 +1108,14 @@ static void zfcp_scsi_dbf_event(const char *tag, const char *tag2, int level, spin_unlock_irqrestore(&adapter->scsi_dbf_lock, flags); } +/** + * zfcp_scsi_dbf_event_result - trace event for SCSI command completion + * @tag: tag indicating success or failure of SCSI command + * @level: trace level applicable for this event + * @adapter: adapter that has been used to issue the SCSI command + * @scsi_cmnd: SCSI command pointer + * @fsf_req: request used to issue SCSI command (might be NULL) + */ void zfcp_scsi_dbf_event_result(const char *tag, int level, struct zfcp_adapter *adapter, struct scsi_cmnd *scsi_cmnd, @@ -1077,6 +1124,14 @@ void zfcp_scsi_dbf_event_result(const char *tag, int level, zfcp_scsi_dbf_event("rslt", tag, level, adapter, scsi_cmnd, fsf_req, 0); } +/** + * zfcp_scsi_dbf_event_abort - trace event for SCSI command abort + * @tag: tag indicating success or failure of abort operation + * @adapter: adapter thas has been used to issue SCSI command to be aborted + * @scsi_cmnd: SCSI command to be aborted + * @new_fsf_req: request containing abort (might be NULL) + * @old_req_id: identifier of request containg SCSI command to be aborted + */ void zfcp_scsi_dbf_event_abort(const char *tag, struct zfcp_adapter *adapter, struct scsi_cmnd *scsi_cmnd, struct zfcp_fsf_req *new_fsf_req, @@ -1086,6 +1141,13 @@ void zfcp_scsi_dbf_event_abort(const char *tag, struct zfcp_adapter *adapter, old_req_id); } +/** + * zfcp_scsi_dbf_event_devreset - trace event for Logical Unit or Target Reset + * @tag: tag indicating success or failure of reset operation + * @flag: indicates type of reset (Target Reset, Logical Unit Reset) + * @unit: unit that needs reset + * @scsi_cmnd: SCSI command which caused this error recovery + */ void zfcp_scsi_dbf_event_devreset(const char *tag, u8 flag, struct zfcp_unit *unit, struct scsi_cmnd *scsi_cmnd) -- cgit v1.2.3-59-g8ed1b From 2d921c321ca670201abe9eff5e53585c39e68f5e Mon Sep 17 00:00:00 2001 From: Ursula Braun 
Date: Tue, 1 Apr 2008 10:26:53 +0200 Subject: qeth: improve ip_list administration after deregister failures 1. ip_list handling after deregister failure of multicast address: If error code "MC Address not found" is returned do not re-add multicast address to ip_list. For other error codes readd multicast address at the end of function qeth_delete_all_mc. 2. ip_list handling after deregister failure or normal ip address: If error code "IP Address not found" is returned do not re-add multicast address to ip list. This is especially important in IP address takeover scenarios, to enable re-takeover of a taken over IP address. Signed-off-by: Ursula Braun Signed-off-by: Frank Blaschka Signed-off-by: Jeff Garzik --- drivers/s390/net/qeth_core_mpc.c | 2 +- drivers/s390/net/qeth_core_mpc.h | 2 +- drivers/s390/net/qeth_l3_main.c | 14 +++++++++----- 3 files changed, 11 insertions(+), 7 deletions(-) (limited to 'drivers/s390') diff --git a/drivers/s390/net/qeth_core_mpc.c b/drivers/s390/net/qeth_core_mpc.c index 8653b73e5dcf..441533f2062e 100644 --- a/drivers/s390/net/qeth_core_mpc.c +++ b/drivers/s390/net/qeth_core_mpc.c @@ -195,7 +195,7 @@ static struct ipa_rc_msg qeth_ipa_rc_msg[] = { {IPA_RC_SETIP_NO_STARTLAN, "Setip no startlan received"}, {IPA_RC_SETIP_ALREADY_RECEIVED, "Setip already received"}, {IPA_RC_IP_ADDR_ALREADY_USED, "IP address already in use on LAN"}, - {IPA_RC_MULTICAST_FULL, "No task available, multicast full"}, + {IPA_RC_MC_ADDR_NOT_FOUND, "Multicast address not found"}, {IPA_RC_SETIP_INVALID_VERSION, "SETIP invalid IP version"}, {IPA_RC_UNSUPPORTED_SUBCMD, "Unsupported assist subcommand"}, {IPA_RC_ARP_ASSIST_NO_ENABLE, "Only partial success, no enable"}, diff --git a/drivers/s390/net/qeth_core_mpc.h b/drivers/s390/net/qeth_core_mpc.h index de221932f30f..18548822e37c 100644 --- a/drivers/s390/net/qeth_core_mpc.h +++ b/drivers/s390/net/qeth_core_mpc.h @@ -182,7 +182,7 @@ enum qeth_ipa_return_codes { IPA_RC_SETIP_NO_STARTLAN = 0xe008, IPA_RC_SETIP_ALREADY_RECEIVED = 0xe009, IPA_RC_IP_ADDR_ALREADY_USED = 0xe00a, - IPA_RC_MULTICAST_FULL = 0xe00b, + IPA_RC_MC_ADDR_NOT_FOUND = 0xe00b, IPA_RC_SETIP_INVALID_VERSION = 0xe00d, IPA_RC_UNSUPPORTED_SUBCMD = 0xe00e, IPA_RC_ARP_ASSIST_NO_ENABLE = 0xe00f, diff --git a/drivers/s390/net/qeth_l3_main.c b/drivers/s390/net/qeth_l3_main.c index 21c439046b3c..1013a2a8ec00 100644 --- a/drivers/s390/net/qeth_l3_main.c +++ b/drivers/s390/net/qeth_l3_main.c @@ -401,8 +401,11 @@ static int __qeth_l3_ref_ip_on_card(struct qeth_card *card, static void __qeth_l3_delete_all_mc(struct qeth_card *card, unsigned long *flags) { + struct list_head fail_list; struct qeth_ipaddr *addr, *tmp; int rc; + + INIT_LIST_HEAD(&fail_list); again: list_for_each_entry_safe(addr, tmp, &card->ip_list, entry) { if (addr->is_multicast) { @@ -410,13 +413,14 @@ again: spin_unlock_irqrestore(&card->ip_lock, *flags); rc = qeth_l3_deregister_addr_entry(card, addr); spin_lock_irqsave(&card->ip_lock, *flags); - if (!rc) { + if (!rc || (rc == IPA_RC_MC_ADDR_NOT_FOUND)) kfree(addr); - goto again; - } else - list_add(&addr->entry, &card->ip_list); + else + list_add_tail(&addr->entry, &fail_list); + goto again; } } + list_splice(&fail_list, &card->ip_list); } static void qeth_l3_set_ip_addr_list(struct qeth_card *card) @@ -467,7 +471,7 @@ static void qeth_l3_set_ip_addr_list(struct qeth_card *card) spin_unlock_irqrestore(&card->ip_lock, flags); rc = qeth_l3_deregister_addr_entry(card, addr); spin_lock_irqsave(&card->ip_lock, flags); - if (!rc) + if (!rc || (rc == 
IPA_RC_PRIMARY_ALREADY_DEFINED)) kfree(addr); else list_add_tail(&addr->entry, &card->ip_list); -- cgit v1.2.3-59-g8ed1b From 508b3c4f71dc348f8b68f1b4ea3aa0d115f0199d Mon Sep 17 00:00:00 2001 From: Ursula Braun Date: Tue, 1 Apr 2008 10:26:54 +0200 Subject: qeth: allow qdio queue element addresses > 2GB OSA-adapters do not have an address limitation for the qdio queue structures except the MAX storage level of the current processor. And due to a recent z/VM APAR there is no longer a restriction to allocate qdio structures below 2 GB. Signed-off-by: Ursula Braun Signed-off-by: Frank Blaschka Signed-off-by: Jeff Garzik --- drivers/s390/net/qeth_core_main.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) (limited to 'drivers/s390') diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c index 95c6fcf58953..86bcaf0e9957 100644 --- a/drivers/s390/net/qeth_core_main.c +++ b/drivers/s390/net/qeth_core_main.c @@ -241,7 +241,7 @@ static int qeth_alloc_buffer_pool(struct qeth_card *card) return -ENOMEM; } for (j = 0; j < QETH_MAX_BUFFER_ELEMENTS(card); ++j) { - ptr = (void *) __get_free_page(GFP_KERNEL|GFP_DMA); + ptr = (void *) __get_free_page(GFP_KERNEL); if (!ptr) { while (j > 0) free_page((unsigned long) @@ -2000,7 +2000,7 @@ static int qeth_alloc_qdio_buffers(struct qeth_card *card) return 0; card->qdio.in_q = kmalloc(sizeof(struct qeth_qdio_q), - GFP_KERNEL|GFP_DMA); + GFP_KERNEL); if (!card->qdio.in_q) goto out_nomem; QETH_DBF_TEXT(setup, 2, "inq"); @@ -2021,7 +2021,7 @@ static int qeth_alloc_qdio_buffers(struct qeth_card *card) goto out_freepool; for (i = 0; i < card->qdio.no_out_queues; ++i) { card->qdio.out_qs[i] = kmalloc(sizeof(struct qeth_qdio_out_q), - GFP_KERNEL|GFP_DMA); + GFP_KERNEL); if (!card->qdio.out_qs[i]) goto out_freeoutq; QETH_DBF_TEXT_(setup, 2, "outq %i", i); @@ -2308,7 +2308,7 @@ static inline struct qeth_buffer_pool_entry *qeth_find_free_buffer_pool_entry( struct qeth_buffer_pool_entry, list); for (i = 0; i < QETH_MAX_BUFFER_ELEMENTS(card); ++i) { if (page_count(virt_to_page(entry->elements[i])) > 1) { - page = alloc_page(GFP_ATOMIC|GFP_DMA); + page = alloc_page(GFP_ATOMIC); if (!page) { return NULL; } else { -- cgit v1.2.3-59-g8ed1b From 922dc0624ea02905e33a7fe1440f8cd157f9a4e5 Mon Sep 17 00:00:00 2001 From: Ursula Braun Date: Tue, 1 Apr 2008 10:26:55 +0200 Subject: qeth: set lan_online flag after a received STARTLAN Problem: A STARTLAN command from the adapter may arrive while a qeth recovery is currently running with a failed qeth STARTLAN. Usually qeth schedules a recovery when receiving a STARTLAN command from the adapter. But another recovery scheduled while a recovery is already running never starts. Thus the qeth-administered lan_online flag remains zero in this scenario, even though the adapter-STARTLAN has happened. Solution: Set lan_online flag for a received STARTLAN from the adapter in case scheduled recovery does not start. 
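A minimal sketch of the resulting code path (context lines are taken from the qeth_check_ipa_data hunk below; only the card->lan_online assignment is new, and the comment is illustrative):

    /* unsolicited STARTLAN received in qeth_check_ipa_data() */
    netif_carrier_on(card->dev);
    card->lan_online = 1;   /* record the link state even if the
                             * scheduled recovery is dropped because
                             * a recovery is already running */
    qeth_schedule_recovery(card);
    return NULL;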
Signed-off-by: Ursula Braun Signed-off-by: Frank Blaschka Signed-off-by: Jeff Garzik --- drivers/s390/net/qeth_core_main.c | 1 + 1 file changed, 1 insertion(+) (limited to 'drivers/s390') diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c index 86bcaf0e9957..ce27c0f3c4d3 100644 --- a/drivers/s390/net/qeth_core_main.c +++ b/drivers/s390/net/qeth_core_main.c @@ -417,6 +417,7 @@ static struct qeth_ipa_cmd *qeth_check_ipa_data(struct qeth_card *card, QETH_CARD_IFNAME(card), card->info.chpid); netif_carrier_on(card->dev); + card->lan_online = 1; qeth_schedule_recovery(card); return NULL; case IPA_CMD_MODCCID: -- cgit v1.2.3-59-g8ed1b From 128837259912087101cd336226abc7ee3e8555b5 Mon Sep 17 00:00:00 2001 From: Ursula Braun Date: Tue, 1 Apr 2008 10:26:56 +0200 Subject: qeth: CCL-sequence numbers required for protocol ETH_P_802_2 only Symptom: slow CCL response time Problem: non-ETH_P_802_2 packets are not delivered to NDH for CCL. But CCL detects missing sequence numbers, which cause a serious performance problem with CCL. Solution: assign sequence numbers only to 802.2 packets. Signed-off-by: Ursula Braun Signed-off-by: Frank Blaschka Signed-off-by: Jeff Garzik --- drivers/s390/net/qeth_l2_main.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) (limited to 'drivers/s390') diff --git a/drivers/s390/net/qeth_l2_main.c b/drivers/s390/net/qeth_l2_main.c index 4417a3629ae0..0263d9406fcf 100644 --- a/drivers/s390/net/qeth_l2_main.c +++ b/drivers/s390/net/qeth_l2_main.c @@ -451,7 +451,8 @@ static void qeth_l2_process_inbound_buffer(struct qeth_card *card, skb->ip_summed = CHECKSUM_UNNECESSARY; else skb->ip_summed = CHECKSUM_NONE; - *((__u32 *)skb->cb) = ++card->seqno.pkt_seqno; + if (skb->protocol == htons(ETH_P_802_2)) + *((__u32 *)skb->cb) = ++card->seqno.pkt_seqno; len = skb->len; netif_rx(skb); break; -- cgit v1.2.3-59-g8ed1b From b7624ec1cfaa1218320faa00a061b9891ed28997 Mon Sep 17 00:00:00 2001 From: Frank Blaschka Date: Tue, 1 Apr 2008 10:26:57 +0200 Subject: qeth: layer 3 do not allow to change mac address hw does not allow to change the mac address. Signed-off-by: Frank Blaschka Signed-off-by: Jeff Garzik --- drivers/s390/net/qeth_l3_main.c | 1 + 1 file changed, 1 insertion(+) (limited to 'drivers/s390') diff --git a/drivers/s390/net/qeth_l3_main.c b/drivers/s390/net/qeth_l3_main.c index 1013a2a8ec00..e95220a2638d 100644 --- a/drivers/s390/net/qeth_l3_main.c +++ b/drivers/s390/net/qeth_l3_main.c @@ -2961,6 +2961,7 @@ static int qeth_l3_setup_netdev(struct qeth_card *card) card->dev->vlan_rx_add_vid = qeth_l3_vlan_rx_add_vid; card->dev->vlan_rx_kill_vid = qeth_l3_vlan_rx_kill_vid; card->dev->mtu = card->info.initial_mtu; + card->dev->set_mac_address = NULL; SET_ETHTOOL_OPS(card->dev, &qeth_l3_ethtool_ops); card->dev->features |= NETIF_F_HW_VLAN_TX | NETIF_F_HW_VLAN_RX | -- cgit v1.2.3-59-g8ed1b From d11ba0c40fa8a21511822efee3be8389f94f0431 Mon Sep 17 00:00:00 2001 From: Peter Tiedemann Date: Tue, 1 Apr 2008 10:26:58 +0200 Subject: qeth: improving debug message handling Improving debug message handling, moving ipa into messages from kernel to dbf, some cleanups and typo fixes. 
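A usage sketch of the new table-driven interface defined below; the area tokens, macros and QETH_DBF_CTRL_LEN are from this patch, while "cardinit", iob, dev_name and rc are placeholders chosen for illustration:

    QETH_DBF_TEXT(SETUP, 2, "cardinit");
    QETH_DBF_HEX(CTRL, 2, iob->data, QETH_DBF_CTRL_LEN);
    QETH_DBF_MESSAGE(2, "error on device %s: rc=%d\n", dev_name, rc);

QETH_DBF_TEXT(SETUP, ...) now expands to debug_text_event(qeth_dbf[QETH_DBF_SETUP].id, ...), so adjusting the size or level of a debug area only touches the qeth_dbf[] table instead of a set of per-area constants.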
Signed-off-by: Peter Tiedemann Signed-off-by: Frank Blaschka Signed-off-by: Jeff Garzik --- drivers/s390/net/qeth_core.h | 93 +++--- drivers/s390/net/qeth_core_main.c | 579 +++++++++++++++++--------------------- drivers/s390/net/qeth_core_mpc.c | 2 +- drivers/s390/net/qeth_core_offl.c | 60 ++-- drivers/s390/net/qeth_l2_main.c | 143 +++++----- drivers/s390/net/qeth_l3.h | 11 +- drivers/s390/net/qeth_l3_main.c | 282 +++++++++---------- 7 files changed, 544 insertions(+), 626 deletions(-) (limited to 'drivers/s390') diff --git a/drivers/s390/net/qeth_core.h b/drivers/s390/net/qeth_core.h index 9485e363ca11..66f4f12503c9 100644 --- a/drivers/s390/net/qeth_core.h +++ b/drivers/s390/net/qeth_core.h @@ -34,59 +34,53 @@ #include "qeth_core_mpc.h" +#define KMSG_COMPONENT "qeth" + /** * Debug Facility stuff */ -#define QETH_DBF_SETUP_NAME "qeth_setup" -#define QETH_DBF_SETUP_LEN 8 -#define QETH_DBF_SETUP_PAGES 8 -#define QETH_DBF_SETUP_NR_AREAS 1 -#define QETH_DBF_SETUP_LEVEL 5 - -#define QETH_DBF_MISC_NAME "qeth_misc" -#define QETH_DBF_MISC_LEN 128 -#define QETH_DBF_MISC_PAGES 2 -#define QETH_DBF_MISC_NR_AREAS 1 -#define QETH_DBF_MISC_LEVEL 2 - -#define QETH_DBF_DATA_NAME "qeth_data" -#define QETH_DBF_DATA_LEN 96 -#define QETH_DBF_DATA_PAGES 8 -#define QETH_DBF_DATA_NR_AREAS 1 -#define QETH_DBF_DATA_LEVEL 2 - -#define QETH_DBF_CONTROL_NAME "qeth_control" -#define QETH_DBF_CONTROL_LEN 256 -#define QETH_DBF_CONTROL_PAGES 8 -#define QETH_DBF_CONTROL_NR_AREAS 1 -#define QETH_DBF_CONTROL_LEVEL 5 - -#define QETH_DBF_TRACE_NAME "qeth_trace" -#define QETH_DBF_TRACE_LEN 8 -#define QETH_DBF_TRACE_PAGES 4 -#define QETH_DBF_TRACE_NR_AREAS 1 -#define QETH_DBF_TRACE_LEVEL 3 - -#define QETH_DBF_SENSE_NAME "qeth_sense" -#define QETH_DBF_SENSE_LEN 64 -#define QETH_DBF_SENSE_PAGES 2 -#define QETH_DBF_SENSE_NR_AREAS 1 -#define QETH_DBF_SENSE_LEVEL 2 - -#define QETH_DBF_QERR_NAME "qeth_qerr" -#define QETH_DBF_QERR_LEN 8 -#define QETH_DBF_QERR_PAGES 2 -#define QETH_DBF_QERR_NR_AREAS 1 -#define QETH_DBF_QERR_LEVEL 2 +enum qeth_dbf_names { + QETH_DBF_SETUP, + QETH_DBF_QERR, + QETH_DBF_TRACE, + QETH_DBF_MSG, + QETH_DBF_SENSE, + QETH_DBF_MISC, + QETH_DBF_CTRL, + QETH_DBF_INFOS /* must be last element */ +}; + +struct qeth_dbf_info { + char name[DEBUG_MAX_NAME_LEN]; + int pages; + int areas; + int len; + int level; + struct debug_view *view; + debug_info_t *id; +}; + +#define QETH_DBF_CTRL_LEN 256 #define QETH_DBF_TEXT(name, level, text) \ - do { \ - debug_text_event(qeth_dbf_##name, level, text); \ - } while (0) + debug_text_event(qeth_dbf[QETH_DBF_##name].id, level, text) #define QETH_DBF_HEX(name, level, addr, len) \ + debug_event(qeth_dbf[QETH_DBF_##name].id, level, (void *)(addr), len) + +#define QETH_DBF_MESSAGE(level, text...) \ + debug_sprintf_event(qeth_dbf[QETH_DBF_MSG].id, level, text) + +#define QETH_DBF_TEXT_(name, level, text...) 
\ do { \ - debug_event(qeth_dbf_##name, level, (void *)(addr), len); \ + if (qeth_dbf_passes(qeth_dbf[QETH_DBF_##name].id, level)) { \ + char *dbf_txt_buf = \ + get_cpu_var(QETH_DBF_TXT_BUF); \ + sprintf(dbf_txt_buf, text); \ + debug_text_event(qeth_dbf[QETH_DBF_##name].id, \ + level, dbf_txt_buf); \ + put_cpu_var(QETH_DBF_TXT_BUF); \ + } \ } while (0) /* Allow to sort out low debug levels early to avoid wasted sprints */ @@ -826,13 +820,8 @@ void qeth_core_remove_osn_attributes(struct device *); /* exports for qeth discipline device drivers */ extern struct qeth_card_list_struct qeth_core_card_list; -extern debug_info_t *qeth_dbf_setup; -extern debug_info_t *qeth_dbf_data; -extern debug_info_t *qeth_dbf_misc; -extern debug_info_t *qeth_dbf_control; -extern debug_info_t *qeth_dbf_trace; -extern debug_info_t *qeth_dbf_sense; -extern debug_info_t *qeth_dbf_qerr; + +extern struct qeth_dbf_info qeth_dbf[QETH_DBF_INFOS]; void qeth_set_allowed_threads(struct qeth_card *, unsigned long , int); int qeth_threads_running(struct qeth_card *, unsigned long); diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c index ce27c0f3c4d3..5cfe0ef7719a 100644 --- a/drivers/s390/net/qeth_core_main.c +++ b/drivers/s390/net/qeth_core_main.c @@ -26,38 +26,35 @@ #include "qeth_core.h" #include "qeth_core_offl.h" -#define QETH_DBF_TEXT_(name, level, text...) \ - do { \ - if (qeth_dbf_passes(qeth_dbf_##name, level)) { \ - char *dbf_txt_buf = \ - get_cpu_var(qeth_core_dbf_txt_buf); \ - sprintf(dbf_txt_buf, text); \ - debug_text_event(qeth_dbf_##name, level, dbf_txt_buf); \ - put_cpu_var(qeth_core_dbf_txt_buf); \ - } \ - } while (0) +static DEFINE_PER_CPU(char[256], qeth_core_dbf_txt_buf); +#define QETH_DBF_TXT_BUF qeth_core_dbf_txt_buf + +struct qeth_dbf_info qeth_dbf[QETH_DBF_INFOS] = { + /* define dbf - Name, Pages, Areas, Maxlen, Level, View, Handle */ + /* N P A M L V H */ + [QETH_DBF_SETUP] = {"qeth_setup", + 8, 1, 8, 5, &debug_hex_ascii_view, NULL}, + [QETH_DBF_QERR] = {"qeth_qerr", + 2, 1, 8, 2, &debug_hex_ascii_view, NULL}, + [QETH_DBF_TRACE] = {"qeth_trace", + 4, 1, 8, 3, &debug_hex_ascii_view, NULL}, + [QETH_DBF_MSG] = {"qeth_msg", + 8, 1, 128, 3, &debug_sprintf_view, NULL}, + [QETH_DBF_SENSE] = {"qeth_sense", + 2, 1, 64, 2, &debug_hex_ascii_view, NULL}, + [QETH_DBF_MISC] = {"qeth_misc", + 2, 1, 256, 2, &debug_hex_ascii_view, NULL}, + [QETH_DBF_CTRL] = {"qeth_control", + 8, 1, QETH_DBF_CTRL_LEN, 5, &debug_hex_ascii_view, NULL}, +}; +EXPORT_SYMBOL_GPL(qeth_dbf); struct qeth_card_list_struct qeth_core_card_list; EXPORT_SYMBOL_GPL(qeth_core_card_list); -debug_info_t *qeth_dbf_setup; -EXPORT_SYMBOL_GPL(qeth_dbf_setup); -debug_info_t *qeth_dbf_data; -EXPORT_SYMBOL_GPL(qeth_dbf_data); -debug_info_t *qeth_dbf_misc; -EXPORT_SYMBOL_GPL(qeth_dbf_misc); -debug_info_t *qeth_dbf_control; -EXPORT_SYMBOL_GPL(qeth_dbf_control); -debug_info_t *qeth_dbf_trace; -EXPORT_SYMBOL_GPL(qeth_dbf_trace); -debug_info_t *qeth_dbf_sense; -EXPORT_SYMBOL_GPL(qeth_dbf_sense); -debug_info_t *qeth_dbf_qerr; -EXPORT_SYMBOL_GPL(qeth_dbf_qerr); static struct device *qeth_core_root_dev; static unsigned int known_devices[][10] = QETH_MODELLIST_ARRAY; static struct lock_class_key qdio_out_skb_queue_key; -static DEFINE_PER_CPU(char[256], qeth_core_dbf_txt_buf); static void qeth_send_control_data_cb(struct qeth_channel *, struct qeth_cmd_buffer *); @@ -219,7 +216,7 @@ void qeth_clear_working_pool_list(struct qeth_card *card) { struct qeth_buffer_pool_entry *pool_entry, *tmp; - QETH_DBF_TEXT(trace, 5, "clwrklst"); + 
QETH_DBF_TEXT(TRACE, 5, "clwrklst"); list_for_each_entry_safe(pool_entry, tmp, &card->qdio.in_buf_pool.entry_list, list){ list_del(&pool_entry->list); @@ -233,7 +230,7 @@ static int qeth_alloc_buffer_pool(struct qeth_card *card) void *ptr; int i, j; - QETH_DBF_TEXT(trace, 5, "alocpool"); + QETH_DBF_TEXT(TRACE, 5, "alocpool"); for (i = 0; i < card->qdio.init_pool.buf_count; ++i) { pool_entry = kmalloc(sizeof(*pool_entry), GFP_KERNEL); if (!pool_entry) { @@ -260,7 +257,7 @@ static int qeth_alloc_buffer_pool(struct qeth_card *card) int qeth_realloc_buffer_pool(struct qeth_card *card, int bufcnt) { - QETH_DBF_TEXT(trace, 2, "realcbp"); + QETH_DBF_TEXT(TRACE, 2, "realcbp"); if ((card->state != CARD_STATE_DOWN) && (card->state != CARD_STATE_RECOVER)) @@ -321,7 +318,7 @@ static int qeth_issue_next_read(struct qeth_card *card) int rc; struct qeth_cmd_buffer *iob; - QETH_DBF_TEXT(trace, 5, "issnxrd"); + QETH_DBF_TEXT(TRACE, 5, "issnxrd"); if (card->read.state != CH_STATE_UP) return -EIO; iob = qeth_get_buffer(&card->read); @@ -330,7 +327,7 @@ static int qeth_issue_next_read(struct qeth_card *card) return -ENOMEM; } qeth_setup_ccw(&card->read, iob->data, QETH_BUFSIZE); - QETH_DBF_TEXT(trace, 6, "noirqpnd"); + QETH_DBF_TEXT(TRACE, 6, "noirqpnd"); rc = ccw_device_start(card->read.ccwdev, &card->read.ccw, (addr_t) iob, 0, 0); if (rc) { @@ -368,19 +365,19 @@ static void qeth_put_reply(struct qeth_reply *reply) kfree(reply); } -static void qeth_issue_ipa_msg(struct qeth_ipa_cmd *cmd, +static void qeth_issue_ipa_msg(struct qeth_ipa_cmd *cmd, int rc, struct qeth_card *card) { - int rc; - int com; char *ipa_name; - - com = cmd->hdr.command; - rc = cmd->hdr.return_code; + int com = cmd->hdr.command; ipa_name = qeth_get_ipa_cmd_name(com); - - PRINT_ERR("%s(x%X) for %s returned x%X \"%s\"\n", ipa_name, com, - QETH_CARD_IFNAME(card), rc, qeth_get_ipa_msg(rc)); + if (rc) + QETH_DBF_MESSAGE(2, "IPA: %s(x%X) for %s returned x%X \"%s\"\n", + ipa_name, com, QETH_CARD_IFNAME(card), + rc, qeth_get_ipa_msg(rc)); + else + QETH_DBF_MESSAGE(5, "IPA: %s(x%X) for %s succeeded\n", + ipa_name, com, QETH_CARD_IFNAME(card)); } static struct qeth_ipa_cmd *qeth_check_ipa_data(struct qeth_card *card, @@ -388,14 +385,14 @@ static struct qeth_ipa_cmd *qeth_check_ipa_data(struct qeth_card *card, { struct qeth_ipa_cmd *cmd = NULL; - QETH_DBF_TEXT(trace, 5, "chkipad"); + QETH_DBF_TEXT(TRACE, 5, "chkipad"); if (IS_IPA(iob->data)) { cmd = (struct qeth_ipa_cmd *) PDU_ENCAPSULATION(iob->data); if (IS_IPA_REPLY(cmd)) { - if (cmd->hdr.return_code && - (cmd->hdr.command < IPA_CMD_SETCCID || - cmd->hdr.command > IPA_CMD_MODCCID)) - qeth_issue_ipa_msg(cmd, card); + if (cmd->hdr.command < IPA_CMD_SETCCID || + cmd->hdr.command > IPA_CMD_MODCCID) + qeth_issue_ipa_msg(cmd, + cmd->hdr.return_code, card); return cmd; } else { switch (cmd->hdr.command) { @@ -423,10 +420,10 @@ static struct qeth_ipa_cmd *qeth_check_ipa_data(struct qeth_card *card, case IPA_CMD_MODCCID: return cmd; case IPA_CMD_REGISTER_LOCAL_ADDR: - QETH_DBF_TEXT(trace, 3, "irla"); + QETH_DBF_TEXT(TRACE, 3, "irla"); break; case IPA_CMD_UNREGISTER_LOCAL_ADDR: - QETH_DBF_TEXT(trace, 3, "urla"); + QETH_DBF_TEXT(TRACE, 3, "urla"); break; default: PRINT_WARN("Received data is IPA " @@ -443,7 +440,7 @@ void qeth_clear_ipacmd_list(struct qeth_card *card) struct qeth_reply *reply, *r; unsigned long flags; - QETH_DBF_TEXT(trace, 4, "clipalst"); + QETH_DBF_TEXT(TRACE, 4, "clipalst"); spin_lock_irqsave(&card->lock, flags); list_for_each_entry_safe(reply, r, &card->cmd_waiter_list, list) { @@ 
-463,16 +460,16 @@ static int qeth_check_idx_response(unsigned char *buffer) if (!buffer) return 0; - QETH_DBF_HEX(control, 2, buffer, QETH_DBF_CONTROL_LEN); + QETH_DBF_HEX(CTRL, 2, buffer, QETH_DBF_CTRL_LEN); if ((buffer[2] & 0xc0) == 0xc0) { PRINT_WARN("received an IDX TERMINATE " "with cause code 0x%02x%s\n", buffer[4], ((buffer[4] == 0x22) ? " -- try another portname" : "")); - QETH_DBF_TEXT(trace, 2, "ckidxres"); - QETH_DBF_TEXT(trace, 2, " idxterm"); - QETH_DBF_TEXT_(trace, 2, " rc%d", -EIO); + QETH_DBF_TEXT(TRACE, 2, "ckidxres"); + QETH_DBF_TEXT(TRACE, 2, " idxterm"); + QETH_DBF_TEXT_(TRACE, 2, " rc%d", -EIO); return -EIO; } return 0; @@ -483,7 +480,7 @@ static void qeth_setup_ccw(struct qeth_channel *channel, unsigned char *iob, { struct qeth_card *card; - QETH_DBF_TEXT(trace, 4, "setupccw"); + QETH_DBF_TEXT(TRACE, 4, "setupccw"); card = CARD_FROM_CDEV(channel->ccwdev); if (channel == &card->read) memcpy(&channel->ccw, READ_CCW, sizeof(struct ccw1)); @@ -497,7 +494,7 @@ static struct qeth_cmd_buffer *__qeth_get_buffer(struct qeth_channel *channel) { __u8 index; - QETH_DBF_TEXT(trace, 6, "getbuff"); + QETH_DBF_TEXT(TRACE, 6, "getbuff"); index = channel->io_buf_no; do { if (channel->iob[index].state == BUF_STATE_FREE) { @@ -518,7 +515,7 @@ void qeth_release_buffer(struct qeth_channel *channel, { unsigned long flags; - QETH_DBF_TEXT(trace, 6, "relbuff"); + QETH_DBF_TEXT(TRACE, 6, "relbuff"); spin_lock_irqsave(&channel->iob_lock, flags); memset(iob->data, 0, QETH_BUFSIZE); iob->state = BUF_STATE_FREE; @@ -568,7 +565,7 @@ static void qeth_send_control_data_cb(struct qeth_channel *channel, unsigned long flags; int keep_reply; - QETH_DBF_TEXT(trace, 4, "sndctlcb"); + QETH_DBF_TEXT(TRACE, 4, "sndctlcb"); card = CARD_FROM_CDEV(channel->ccwdev); if (qeth_check_idx_response(iob->data)) { @@ -638,7 +635,7 @@ static int qeth_setup_channel(struct qeth_channel *channel) { int cnt; - QETH_DBF_TEXT(setup, 2, "setupch"); + QETH_DBF_TEXT(SETUP, 2, "setupch"); for (cnt = 0; cnt < QETH_CMD_BUFFER_NO; cnt++) { channel->iob[cnt].data = (char *) kmalloc(QETH_BUFSIZE, GFP_DMA|GFP_KERNEL); @@ -732,7 +729,7 @@ EXPORT_SYMBOL_GPL(qeth_do_run_thread); void qeth_schedule_recovery(struct qeth_card *card) { - QETH_DBF_TEXT(trace, 2, "startrec"); + QETH_DBF_TEXT(TRACE, 2, "startrec"); if (qeth_set_thread_start_bit(card, QETH_RECOVER_THREAD) == 0) schedule_work(&card->kernel_thread_starter); } @@ -750,7 +747,7 @@ static int qeth_get_problem(struct ccw_device *cdev, struct irb *irb) if (cstat & (SCHN_STAT_CHN_CTRL_CHK | SCHN_STAT_INTF_CTRL_CHK | SCHN_STAT_CHN_DATA_CHK | SCHN_STAT_CHAIN_CHECK | SCHN_STAT_PROT_CHECK | SCHN_STAT_PROG_CHECK)) { - QETH_DBF_TEXT(trace, 2, "CGENCHK"); + QETH_DBF_TEXT(TRACE, 2, "CGENCHK"); PRINT_WARN("check on device %s, dstat=x%x, cstat=x%x ", cdev->dev.bus_id, dstat, cstat); print_hex_dump(KERN_WARNING, "qeth: irb ", DUMP_PREFIX_OFFSET, @@ -761,23 +758,23 @@ static int qeth_get_problem(struct ccw_device *cdev, struct irb *irb) if (dstat & DEV_STAT_UNIT_CHECK) { if (sense[SENSE_RESETTING_EVENT_BYTE] & SENSE_RESETTING_EVENT_FLAG) { - QETH_DBF_TEXT(trace, 2, "REVIND"); + QETH_DBF_TEXT(TRACE, 2, "REVIND"); return 1; } if (sense[SENSE_COMMAND_REJECT_BYTE] & SENSE_COMMAND_REJECT_FLAG) { - QETH_DBF_TEXT(trace, 2, "CMDREJi"); + QETH_DBF_TEXT(TRACE, 2, "CMDREJi"); return 0; } if ((sense[2] == 0xaf) && (sense[3] == 0xfe)) { - QETH_DBF_TEXT(trace, 2, "AFFE"); + QETH_DBF_TEXT(TRACE, 2, "AFFE"); return 1; } if ((!sense[0]) && (!sense[1]) && (!sense[2]) && (!sense[3])) { - QETH_DBF_TEXT(trace, 2, 
"ZEROSEN"); + QETH_DBF_TEXT(TRACE, 2, "ZEROSEN"); return 0; } - QETH_DBF_TEXT(trace, 2, "DGENCHK"); + QETH_DBF_TEXT(TRACE, 2, "DGENCHK"); return 1; } return 0; @@ -792,13 +789,13 @@ static long __qeth_check_irb_error(struct ccw_device *cdev, switch (PTR_ERR(irb)) { case -EIO: PRINT_WARN("i/o-error on device %s\n", cdev->dev.bus_id); - QETH_DBF_TEXT(trace, 2, "ckirberr"); - QETH_DBF_TEXT_(trace, 2, " rc%d", -EIO); + QETH_DBF_TEXT(TRACE, 2, "ckirberr"); + QETH_DBF_TEXT_(TRACE, 2, " rc%d", -EIO); break; case -ETIMEDOUT: PRINT_WARN("timeout on device %s\n", cdev->dev.bus_id); - QETH_DBF_TEXT(trace, 2, "ckirberr"); - QETH_DBF_TEXT_(trace, 2, " rc%d", -ETIMEDOUT); + QETH_DBF_TEXT(TRACE, 2, "ckirberr"); + QETH_DBF_TEXT_(TRACE, 2, " rc%d", -ETIMEDOUT); if (intparm == QETH_RCD_PARM) { struct qeth_card *card = CARD_FROM_CDEV(cdev); @@ -811,8 +808,8 @@ static long __qeth_check_irb_error(struct ccw_device *cdev, default: PRINT_WARN("unknown error %ld on device %s\n", PTR_ERR(irb), cdev->dev.bus_id); - QETH_DBF_TEXT(trace, 2, "ckirberr"); - QETH_DBF_TEXT(trace, 2, " rc???"); + QETH_DBF_TEXT(TRACE, 2, "ckirberr"); + QETH_DBF_TEXT(TRACE, 2, " rc???"); } return PTR_ERR(irb); } @@ -828,7 +825,7 @@ static void qeth_irq(struct ccw_device *cdev, unsigned long intparm, struct qeth_cmd_buffer *iob; __u8 index; - QETH_DBF_TEXT(trace, 5, "irq"); + QETH_DBF_TEXT(TRACE, 5, "irq"); if (__qeth_check_irb_error(cdev, intparm, irb)) return; @@ -841,13 +838,13 @@ static void qeth_irq(struct ccw_device *cdev, unsigned long intparm, if (card->read.ccwdev == cdev) { channel = &card->read; - QETH_DBF_TEXT(trace, 5, "read"); + QETH_DBF_TEXT(TRACE, 5, "read"); } else if (card->write.ccwdev == cdev) { channel = &card->write; - QETH_DBF_TEXT(trace, 5, "write"); + QETH_DBF_TEXT(TRACE, 5, "write"); } else { channel = &card->data; - QETH_DBF_TEXT(trace, 5, "data"); + QETH_DBF_TEXT(TRACE, 5, "data"); } atomic_set(&channel->irq_pending, 0); @@ -863,12 +860,12 @@ static void qeth_irq(struct ccw_device *cdev, unsigned long intparm, goto out; if (intparm == QETH_CLEAR_CHANNEL_PARM) { - QETH_DBF_TEXT(trace, 6, "clrchpar"); + QETH_DBF_TEXT(TRACE, 6, "clrchpar"); /* we don't have to handle this further */ intparm = 0; } if (intparm == QETH_HALT_CHANNEL_PARM) { - QETH_DBF_TEXT(trace, 6, "hltchpar"); + QETH_DBF_TEXT(TRACE, 6, "hltchpar"); /* we don't have to handle this further */ intparm = 0; } @@ -954,7 +951,7 @@ void qeth_clear_qdio_buffers(struct qeth_card *card) { int i, j; - QETH_DBF_TEXT(trace, 2, "clearqdbf"); + QETH_DBF_TEXT(TRACE, 2, "clearqdbf"); /* clear outbound buffers to free skbs */ for (i = 0; i < card->qdio.no_out_queues; ++i) if (card->qdio.out_qs[i]) { @@ -969,7 +966,7 @@ static void qeth_free_buffer_pool(struct qeth_card *card) { struct qeth_buffer_pool_entry *pool_entry, *tmp; int i = 0; - QETH_DBF_TEXT(trace, 5, "freepool"); + QETH_DBF_TEXT(TRACE, 5, "freepool"); list_for_each_entry_safe(pool_entry, tmp, &card->qdio.init_pool.entry_list, init_list){ for (i = 0; i < QETH_MAX_BUFFER_ELEMENTS(card); ++i) @@ -983,7 +980,7 @@ static void qeth_free_qdio_buffers(struct qeth_card *card) { int i, j; - QETH_DBF_TEXT(trace, 2, "freeqdbf"); + QETH_DBF_TEXT(TRACE, 2, "freeqdbf"); if (atomic_xchg(&card->qdio.state, QETH_QDIO_UNINITIALIZED) == QETH_QDIO_UNINITIALIZED) return; @@ -1008,7 +1005,7 @@ static void qeth_clean_channel(struct qeth_channel *channel) { int cnt; - QETH_DBF_TEXT(setup, 2, "freech"); + QETH_DBF_TEXT(SETUP, 2, "freech"); for (cnt = 0; cnt < QETH_CMD_BUFFER_NO; cnt++) kfree(channel->iob[cnt].data); } @@ -1028,7 
+1025,7 @@ static int qeth_is_1920_device(struct qeth_card *card) u8 chpp; } *chp_dsc; - QETH_DBF_TEXT(setup, 2, "chk_1920"); + QETH_DBF_TEXT(SETUP, 2, "chk_1920"); ccwdev = card->data.ccwdev; chp_dsc = (struct channelPath_dsc *)ccw_device_get_chp_desc(ccwdev, 0); @@ -1037,13 +1034,13 @@ static int qeth_is_1920_device(struct qeth_card *card) single_queue = ((chp_dsc->chpp & 0x02) == 0x02); kfree(chp_dsc); } - QETH_DBF_TEXT_(setup, 2, "rc:%x", single_queue); + QETH_DBF_TEXT_(SETUP, 2, "rc:%x", single_queue); return single_queue; } static void qeth_init_qdio_info(struct qeth_card *card) { - QETH_DBF_TEXT(setup, 4, "intqdinf"); + QETH_DBF_TEXT(SETUP, 4, "intqdinf"); atomic_set(&card->qdio.state, QETH_QDIO_UNINITIALIZED); /* inbound */ card->qdio.in_buf_size = QETH_IN_BUF_SIZE_DEFAULT; @@ -1073,7 +1070,7 @@ static int qeth_do_start_thread(struct qeth_card *card, unsigned long thread) int rc = 0; spin_lock_irqsave(&card->thread_mask_lock, flags); - QETH_DBF_TEXT_(trace, 4, " %02x%02x%02x", + QETH_DBF_TEXT_(TRACE, 4, " %02x%02x%02x", (u8) card->thread_start_mask, (u8) card->thread_allowed_mask, (u8) card->thread_running_mask); @@ -1086,7 +1083,7 @@ static void qeth_start_kernel_thread(struct work_struct *work) { struct qeth_card *card = container_of(work, struct qeth_card, kernel_thread_starter); - QETH_DBF_TEXT(trace , 2, "strthrd"); + QETH_DBF_TEXT(TRACE , 2, "strthrd"); if (card->read.state != CH_STATE_UP && card->write.state != CH_STATE_UP) @@ -1099,8 +1096,8 @@ static void qeth_start_kernel_thread(struct work_struct *work) static int qeth_setup_card(struct qeth_card *card) { - QETH_DBF_TEXT(setup, 2, "setupcrd"); - QETH_DBF_HEX(setup, 2, &card, sizeof(void *)); + QETH_DBF_TEXT(SETUP, 2, "setupcrd"); + QETH_DBF_HEX(SETUP, 2, &card, sizeof(void *)); card->read.state = CH_STATE_DOWN; card->write.state = CH_STATE_DOWN; @@ -1122,7 +1119,7 @@ static int qeth_setup_card(struct qeth_card *card) INIT_LIST_HEAD(&card->ip_list); card->ip_tbd_list = kmalloc(sizeof(struct list_head), GFP_KERNEL); if (!card->ip_tbd_list) { - QETH_DBF_TEXT(setup, 0, "iptbdnom"); + QETH_DBF_TEXT(SETUP, 0, "iptbdnom"); return -ENOMEM; } INIT_LIST_HEAD(card->ip_tbd_list); @@ -1144,11 +1141,11 @@ static struct qeth_card *qeth_alloc_card(void) { struct qeth_card *card; - QETH_DBF_TEXT(setup, 2, "alloccrd"); + QETH_DBF_TEXT(SETUP, 2, "alloccrd"); card = kzalloc(sizeof(struct qeth_card), GFP_DMA|GFP_KERNEL); if (!card) return NULL; - QETH_DBF_HEX(setup, 2, &card, sizeof(void *)); + QETH_DBF_HEX(SETUP, 2, &card, sizeof(void *)); if (qeth_setup_channel(&card->read)) { kfree(card); return NULL; @@ -1166,7 +1163,7 @@ static int qeth_determine_card_type(struct qeth_card *card) { int i = 0; - QETH_DBF_TEXT(setup, 2, "detcdtyp"); + QETH_DBF_TEXT(SETUP, 2, "detcdtyp"); card->qdio.do_prio_queueing = QETH_PRIOQ_DEFAULT; card->qdio.default_out_queue = QETH_DEFAULT_QUEUE; @@ -1197,7 +1194,7 @@ static int qeth_clear_channel(struct qeth_channel *channel) struct qeth_card *card; int rc; - QETH_DBF_TEXT(trace, 3, "clearch"); + QETH_DBF_TEXT(TRACE, 3, "clearch"); card = CARD_FROM_CDEV(channel->ccwdev); spin_lock_irqsave(get_ccwdev_lock(channel->ccwdev), flags); rc = ccw_device_clear(channel->ccwdev, QETH_CLEAR_CHANNEL_PARM); @@ -1221,7 +1218,7 @@ static int qeth_halt_channel(struct qeth_channel *channel) struct qeth_card *card; int rc; - QETH_DBF_TEXT(trace, 3, "haltch"); + QETH_DBF_TEXT(TRACE, 3, "haltch"); card = CARD_FROM_CDEV(channel->ccwdev); spin_lock_irqsave(get_ccwdev_lock(channel->ccwdev), flags); rc = ccw_device_halt(channel->ccwdev, 
QETH_HALT_CHANNEL_PARM); @@ -1242,7 +1239,7 @@ static int qeth_halt_channels(struct qeth_card *card) { int rc1 = 0, rc2 = 0, rc3 = 0; - QETH_DBF_TEXT(trace, 3, "haltchs"); + QETH_DBF_TEXT(TRACE, 3, "haltchs"); rc1 = qeth_halt_channel(&card->read); rc2 = qeth_halt_channel(&card->write); rc3 = qeth_halt_channel(&card->data); @@ -1257,7 +1254,7 @@ static int qeth_clear_channels(struct qeth_card *card) { int rc1 = 0, rc2 = 0, rc3 = 0; - QETH_DBF_TEXT(trace, 3, "clearchs"); + QETH_DBF_TEXT(TRACE, 3, "clearchs"); rc1 = qeth_clear_channel(&card->read); rc2 = qeth_clear_channel(&card->write); rc3 = qeth_clear_channel(&card->data); @@ -1272,8 +1269,8 @@ static int qeth_clear_halt_card(struct qeth_card *card, int halt) { int rc = 0; - QETH_DBF_TEXT(trace, 3, "clhacrd"); - QETH_DBF_HEX(trace, 3, &card, sizeof(void *)); + QETH_DBF_TEXT(TRACE, 3, "clhacrd"); + QETH_DBF_HEX(TRACE, 3, &card, sizeof(void *)); if (halt) rc = qeth_halt_channels(card); @@ -1286,7 +1283,7 @@ int qeth_qdio_clear_card(struct qeth_card *card, int use_halt) { int rc = 0; - QETH_DBF_TEXT(trace, 3, "qdioclr"); + QETH_DBF_TEXT(TRACE, 3, "qdioclr"); switch (atomic_cmpxchg(&card->qdio.state, QETH_QDIO_ESTABLISHED, QETH_QDIO_CLEANING)) { case QETH_QDIO_ESTABLISHED: @@ -1297,7 +1294,7 @@ int qeth_qdio_clear_card(struct qeth_card *card, int use_halt) rc = qdio_cleanup(CARD_DDEV(card), QDIO_FLAG_CLEANUP_USING_CLEAR); if (rc) - QETH_DBF_TEXT_(trace, 3, "1err%d", rc); + QETH_DBF_TEXT_(TRACE, 3, "1err%d", rc); atomic_set(&card->qdio.state, QETH_QDIO_ALLOCATED); break; case QETH_QDIO_CLEANING: @@ -1307,7 +1304,7 @@ int qeth_qdio_clear_card(struct qeth_card *card, int use_halt) } rc = qeth_clear_halt_card(card, use_halt); if (rc) - QETH_DBF_TEXT_(trace, 3, "2err%d", rc); + QETH_DBF_TEXT_(TRACE, 3, "2err%d", rc); card->state = CARD_STATE_DOWN; return rc; } @@ -1367,7 +1364,7 @@ static int qeth_get_unitaddr(struct qeth_card *card) char *prcd; int rc; - QETH_DBF_TEXT(setup, 2, "getunit"); + QETH_DBF_TEXT(SETUP, 2, "getunit"); rc = qeth_read_conf_data(card, (void **) &prcd, &length); if (rc) { PRINT_ERR("qeth_read_conf_data for device %s returned %i\n", @@ -1428,7 +1425,7 @@ static int qeth_idx_activate_get_answer(struct qeth_channel *channel, int rc; struct qeth_card *card; - QETH_DBF_TEXT(setup, 2, "idxanswr"); + QETH_DBF_TEXT(SETUP, 2, "idxanswr"); card = CARD_FROM_CDEV(channel->ccwdev); iob = qeth_get_buffer(channel); iob->callback = idx_reply_cb; @@ -1438,7 +1435,7 @@ static int qeth_idx_activate_get_answer(struct qeth_channel *channel, wait_event(card->wait_q, atomic_cmpxchg(&channel->irq_pending, 0, 1) == 0); - QETH_DBF_TEXT(setup, 6, "noirqpnd"); + QETH_DBF_TEXT(SETUP, 6, "noirqpnd"); spin_lock_irqsave(get_ccwdev_lock(channel->ccwdev), flags); rc = ccw_device_start(channel->ccwdev, &channel->ccw, (addr_t) iob, 0, 0); @@ -1446,7 +1443,7 @@ static int qeth_idx_activate_get_answer(struct qeth_channel *channel, if (rc) { PRINT_ERR("Error2 in activating channel rc=%d\n", rc); - QETH_DBF_TEXT_(setup, 2, "2err%d", rc); + QETH_DBF_TEXT_(SETUP, 2, "2err%d", rc); atomic_set(&channel->irq_pending, 0); wake_up(&card->wait_q); return rc; @@ -1457,7 +1454,7 @@ static int qeth_idx_activate_get_answer(struct qeth_channel *channel, return rc; if (channel->state != CH_STATE_UP) { rc = -ETIME; - QETH_DBF_TEXT_(setup, 2, "3err%d", rc); + QETH_DBF_TEXT_(SETUP, 2, "3err%d", rc); qeth_clear_cmd_buffers(channel); } else rc = 0; @@ -1477,7 +1474,7 @@ static int qeth_idx_activate_channel(struct qeth_channel *channel, card = CARD_FROM_CDEV(channel->ccwdev); - 
QETH_DBF_TEXT(setup, 2, "idxactch"); + QETH_DBF_TEXT(SETUP, 2, "idxactch"); iob = qeth_get_buffer(channel); iob->callback = idx_reply_cb; @@ -1507,7 +1504,7 @@ static int qeth_idx_activate_channel(struct qeth_channel *channel, wait_event(card->wait_q, atomic_cmpxchg(&channel->irq_pending, 0, 1) == 0); - QETH_DBF_TEXT(setup, 6, "noirqpnd"); + QETH_DBF_TEXT(SETUP, 6, "noirqpnd"); spin_lock_irqsave(get_ccwdev_lock(channel->ccwdev), flags); rc = ccw_device_start(channel->ccwdev, &channel->ccw, (addr_t) iob, 0, 0); @@ -1515,7 +1512,7 @@ static int qeth_idx_activate_channel(struct qeth_channel *channel, if (rc) { PRINT_ERR("Error1 in activating channel. rc=%d\n", rc); - QETH_DBF_TEXT_(setup, 2, "1err%d", rc); + QETH_DBF_TEXT_(SETUP, 2, "1err%d", rc); atomic_set(&channel->irq_pending, 0); wake_up(&card->wait_q); return rc; @@ -1526,7 +1523,7 @@ static int qeth_idx_activate_channel(struct qeth_channel *channel, return rc; if (channel->state != CH_STATE_ACTIVATING) { PRINT_WARN("IDX activate timed out!\n"); - QETH_DBF_TEXT_(setup, 2, "2err%d", -ETIME); + QETH_DBF_TEXT_(SETUP, 2, "2err%d", -ETIME); qeth_clear_cmd_buffers(channel); return -ETIME; } @@ -1548,7 +1545,7 @@ static void qeth_idx_write_cb(struct qeth_channel *channel, struct qeth_card *card; __u16 temp; - QETH_DBF_TEXT(setup , 2, "idxwrcb"); + QETH_DBF_TEXT(SETUP , 2, "idxwrcb"); if (channel->state == CH_STATE_DOWN) { channel->state = CH_STATE_ACTIVATING; @@ -1585,7 +1582,7 @@ static void qeth_idx_read_cb(struct qeth_channel *channel, struct qeth_card *card; __u16 temp; - QETH_DBF_TEXT(setup , 2, "idxrdcb"); + QETH_DBF_TEXT(SETUP , 2, "idxrdcb"); if (channel->state == CH_STATE_DOWN) { channel->state = CH_STATE_ACTIVATING; goto out; @@ -1645,7 +1642,7 @@ void qeth_prepare_control_data(struct qeth_card *card, int len, card->seqno.pdu_hdr++; memcpy(QETH_PDU_HEADER_ACK_SEQ_NO(iob->data), &card->seqno.pdu_hdr_ack, QETH_SEQ_NO_LENGTH); - QETH_DBF_HEX(control, 2, iob->data, QETH_DBF_CONTROL_LEN); + QETH_DBF_HEX(CTRL, 2, iob->data, QETH_DBF_CTRL_LEN); } EXPORT_SYMBOL_GPL(qeth_prepare_control_data); @@ -1660,11 +1657,11 @@ int qeth_send_control_data(struct qeth_card *card, int len, struct qeth_reply *reply = NULL; unsigned long timeout; - QETH_DBF_TEXT(trace, 2, "sendctl"); + QETH_DBF_TEXT(TRACE, 2, "sendctl"); reply = qeth_alloc_reply(card); if (!reply) { - PRINT_WARN("Could no alloc qeth_reply!\n"); + PRINT_WARN("Could not alloc qeth_reply!\n"); return -ENOMEM; } reply->callback = reply_cb; @@ -1677,7 +1674,7 @@ int qeth_send_control_data(struct qeth_card *card, int len, spin_lock_irqsave(&card->lock, flags); list_add_tail(&reply->list, &card->cmd_waiter_list); spin_unlock_irqrestore(&card->lock, flags); - QETH_DBF_HEX(control, 2, iob->data, QETH_DBF_CONTROL_LEN); + QETH_DBF_HEX(CTRL, 2, iob->data, QETH_DBF_CTRL_LEN); while (atomic_cmpxchg(&card->write.irq_pending, 0, 1)) ; qeth_prepare_control_data(card, len, iob); @@ -1687,7 +1684,7 @@ int qeth_send_control_data(struct qeth_card *card, int len, else timeout = jiffies + QETH_TIMEOUT; - QETH_DBF_TEXT(trace, 6, "noirqpnd"); + QETH_DBF_TEXT(TRACE, 6, "noirqpnd"); spin_lock_irqsave(get_ccwdev_lock(card->write.ccwdev), flags); rc = ccw_device_start(card->write.ccwdev, &card->write.ccw, (addr_t) iob, 0, 0); @@ -1695,7 +1692,7 @@ int qeth_send_control_data(struct qeth_card *card, int len, if (rc) { PRINT_WARN("qeth_send_control_data: " "ccw_device_start rc = %i\n", rc); - QETH_DBF_TEXT_(trace, 2, " err%d", rc); + QETH_DBF_TEXT_(TRACE, 2, " err%d", rc); spin_lock_irqsave(&card->lock, flags); 
list_del_init(&reply->list); qeth_put_reply(reply); @@ -1727,13 +1724,13 @@ static int qeth_cm_enable_cb(struct qeth_card *card, struct qeth_reply *reply, { struct qeth_cmd_buffer *iob; - QETH_DBF_TEXT(setup, 2, "cmenblcb"); + QETH_DBF_TEXT(SETUP, 2, "cmenblcb"); iob = (struct qeth_cmd_buffer *) data; memcpy(&card->token.cm_filter_r, QETH_CM_ENABLE_RESP_FILTER_TOKEN(iob->data), QETH_MPC_TOKEN_LENGTH); - QETH_DBF_TEXT_(setup, 2, " rc%d", iob->rc); + QETH_DBF_TEXT_(SETUP, 2, " rc%d", iob->rc); return 0; } @@ -1742,7 +1739,7 @@ static int qeth_cm_enable(struct qeth_card *card) int rc; struct qeth_cmd_buffer *iob; - QETH_DBF_TEXT(setup, 2, "cmenable"); + QETH_DBF_TEXT(SETUP, 2, "cmenable"); iob = qeth_wait_for_buffer(&card->write); memcpy(iob->data, CM_ENABLE, CM_ENABLE_SIZE); @@ -1762,13 +1759,13 @@ static int qeth_cm_setup_cb(struct qeth_card *card, struct qeth_reply *reply, struct qeth_cmd_buffer *iob; - QETH_DBF_TEXT(setup, 2, "cmsetpcb"); + QETH_DBF_TEXT(SETUP, 2, "cmsetpcb"); iob = (struct qeth_cmd_buffer *) data; memcpy(&card->token.cm_connection_r, QETH_CM_SETUP_RESP_DEST_ADDR(iob->data), QETH_MPC_TOKEN_LENGTH); - QETH_DBF_TEXT_(setup, 2, " rc%d", iob->rc); + QETH_DBF_TEXT_(SETUP, 2, " rc%d", iob->rc); return 0; } @@ -1777,7 +1774,7 @@ static int qeth_cm_setup(struct qeth_card *card) int rc; struct qeth_cmd_buffer *iob; - QETH_DBF_TEXT(setup, 2, "cmsetup"); + QETH_DBF_TEXT(SETUP, 2, "cmsetup"); iob = qeth_wait_for_buffer(&card->write); memcpy(iob->data, CM_SETUP, CM_SETUP_SIZE); @@ -1878,7 +1875,7 @@ static int qeth_ulp_enable_cb(struct qeth_card *card, struct qeth_reply *reply, __u8 link_type; struct qeth_cmd_buffer *iob; - QETH_DBF_TEXT(setup, 2, "ulpenacb"); + QETH_DBF_TEXT(SETUP, 2, "ulpenacb"); iob = (struct qeth_cmd_buffer *) data; memcpy(&card->token.ulp_filter_r, @@ -1889,7 +1886,7 @@ static int qeth_ulp_enable_cb(struct qeth_card *card, struct qeth_reply *reply, mtu = qeth_get_mtu_outof_framesize(framesize); if (!mtu) { iob->rc = -EINVAL; - QETH_DBF_TEXT_(setup, 2, " rc%d", iob->rc); + QETH_DBF_TEXT_(SETUP, 2, " rc%d", iob->rc); return 0; } card->info.max_mtu = mtu; @@ -1908,7 +1905,7 @@ static int qeth_ulp_enable_cb(struct qeth_card *card, struct qeth_reply *reply, card->info.link_type = link_type; } else card->info.link_type = 0; - QETH_DBF_TEXT_(setup, 2, " rc%d", iob->rc); + QETH_DBF_TEXT_(SETUP, 2, " rc%d", iob->rc); return 0; } @@ -1919,7 +1916,7 @@ static int qeth_ulp_enable(struct qeth_card *card) struct qeth_cmd_buffer *iob; /*FIXME: trace view callbacks*/ - QETH_DBF_TEXT(setup, 2, "ulpenabl"); + QETH_DBF_TEXT(SETUP, 2, "ulpenabl"); iob = qeth_wait_for_buffer(&card->write); memcpy(iob->data, ULP_ENABLE, ULP_ENABLE_SIZE); @@ -1952,13 +1949,13 @@ static int qeth_ulp_setup_cb(struct qeth_card *card, struct qeth_reply *reply, { struct qeth_cmd_buffer *iob; - QETH_DBF_TEXT(setup, 2, "ulpstpcb"); + QETH_DBF_TEXT(SETUP, 2, "ulpstpcb"); iob = (struct qeth_cmd_buffer *) data; memcpy(&card->token.ulp_connection_r, QETH_ULP_SETUP_RESP_CONNECTION_TOKEN(iob->data), QETH_MPC_TOKEN_LENGTH); - QETH_DBF_TEXT_(setup, 2, " rc%d", iob->rc); + QETH_DBF_TEXT_(SETUP, 2, " rc%d", iob->rc); return 0; } @@ -1969,7 +1966,7 @@ static int qeth_ulp_setup(struct qeth_card *card) struct qeth_cmd_buffer *iob; struct ccw_dev_id dev_id; - QETH_DBF_TEXT(setup, 2, "ulpsetup"); + QETH_DBF_TEXT(SETUP, 2, "ulpsetup"); iob = qeth_wait_for_buffer(&card->write); memcpy(iob->data, ULP_SETUP, ULP_SETUP_SIZE); @@ -1994,7 +1991,7 @@ static int qeth_alloc_qdio_buffers(struct qeth_card *card) { int i, j; - 
QETH_DBF_TEXT(setup, 2, "allcqdbf"); + QETH_DBF_TEXT(SETUP, 2, "allcqdbf"); if (atomic_cmpxchg(&card->qdio.state, QETH_QDIO_UNINITIALIZED, QETH_QDIO_ALLOCATED) != QETH_QDIO_UNINITIALIZED) @@ -2004,8 +2001,8 @@ static int qeth_alloc_qdio_buffers(struct qeth_card *card) GFP_KERNEL); if (!card->qdio.in_q) goto out_nomem; - QETH_DBF_TEXT(setup, 2, "inq"); - QETH_DBF_HEX(setup, 2, &card->qdio.in_q, sizeof(void *)); + QETH_DBF_TEXT(SETUP, 2, "inq"); + QETH_DBF_HEX(SETUP, 2, &card->qdio.in_q, sizeof(void *)); memset(card->qdio.in_q, 0, sizeof(struct qeth_qdio_q)); /* give inbound qeth_qdio_buffers their qdio_buffers */ for (i = 0; i < QDIO_MAX_BUFFERS_PER_Q; ++i) @@ -2025,8 +2022,8 @@ static int qeth_alloc_qdio_buffers(struct qeth_card *card) GFP_KERNEL); if (!card->qdio.out_qs[i]) goto out_freeoutq; - QETH_DBF_TEXT_(setup, 2, "outq %i", i); - QETH_DBF_HEX(setup, 2, &card->qdio.out_qs[i], sizeof(void *)); + QETH_DBF_TEXT_(SETUP, 2, "outq %i", i); + QETH_DBF_HEX(SETUP, 2, &card->qdio.out_qs[i], sizeof(void *)); memset(card->qdio.out_qs[i], 0, sizeof(struct qeth_qdio_out_q)); card->qdio.out_qs[i]->queue_no = i; /* give outbound qeth_qdio_buffers their qdio_buffers */ @@ -2086,7 +2083,7 @@ static void qeth_create_qib_param_field_blkt(struct qeth_card *card, static int qeth_qdio_activate(struct qeth_card *card) { - QETH_DBF_TEXT(setup, 3, "qdioact"); + QETH_DBF_TEXT(SETUP, 3, "qdioact"); return qdio_activate(CARD_DDEV(card), 0); } @@ -2095,7 +2092,7 @@ static int qeth_dm_act(struct qeth_card *card) int rc; struct qeth_cmd_buffer *iob; - QETH_DBF_TEXT(setup, 2, "dmact"); + QETH_DBF_TEXT(SETUP, 2, "dmact"); iob = qeth_wait_for_buffer(&card->write); memcpy(iob->data, DM_ACT, DM_ACT_SIZE); @@ -2112,52 +2109,52 @@ static int qeth_mpc_initialize(struct qeth_card *card) { int rc; - QETH_DBF_TEXT(setup, 2, "mpcinit"); + QETH_DBF_TEXT(SETUP, 2, "mpcinit"); rc = qeth_issue_next_read(card); if (rc) { - QETH_DBF_TEXT_(setup, 2, "1err%d", rc); + QETH_DBF_TEXT_(SETUP, 2, "1err%d", rc); return rc; } rc = qeth_cm_enable(card); if (rc) { - QETH_DBF_TEXT_(setup, 2, "2err%d", rc); + QETH_DBF_TEXT_(SETUP, 2, "2err%d", rc); goto out_qdio; } rc = qeth_cm_setup(card); if (rc) { - QETH_DBF_TEXT_(setup, 2, "3err%d", rc); + QETH_DBF_TEXT_(SETUP, 2, "3err%d", rc); goto out_qdio; } rc = qeth_ulp_enable(card); if (rc) { - QETH_DBF_TEXT_(setup, 2, "4err%d", rc); + QETH_DBF_TEXT_(SETUP, 2, "4err%d", rc); goto out_qdio; } rc = qeth_ulp_setup(card); if (rc) { - QETH_DBF_TEXT_(setup, 2, "5err%d", rc); + QETH_DBF_TEXT_(SETUP, 2, "5err%d", rc); goto out_qdio; } rc = qeth_alloc_qdio_buffers(card); if (rc) { - QETH_DBF_TEXT_(setup, 2, "5err%d", rc); + QETH_DBF_TEXT_(SETUP, 2, "5err%d", rc); goto out_qdio; } rc = qeth_qdio_establish(card); if (rc) { - QETH_DBF_TEXT_(setup, 2, "6err%d", rc); + QETH_DBF_TEXT_(SETUP, 2, "6err%d", rc); qeth_free_qdio_buffers(card); goto out_qdio; } rc = qeth_qdio_activate(card); if (rc) { - QETH_DBF_TEXT_(setup, 2, "7err%d", rc); + QETH_DBF_TEXT_(SETUP, 2, "7err%d", rc); goto out_qdio; } rc = qeth_dm_act(card); if (rc) { - QETH_DBF_TEXT_(setup, 2, "8err%d", rc); + QETH_DBF_TEXT_(SETUP, 2, "8err%d", rc); goto out_qdio; } @@ -2261,7 +2258,7 @@ EXPORT_SYMBOL_GPL(qeth_print_status_message); void qeth_put_buffer_pool_entry(struct qeth_card *card, struct qeth_buffer_pool_entry *entry) { - QETH_DBF_TEXT(trace, 6, "ptbfplen"); + QETH_DBF_TEXT(TRACE, 6, "ptbfplen"); list_add_tail(&entry->list, &card->qdio.in_buf_pool.entry_list); } EXPORT_SYMBOL_GPL(qeth_put_buffer_pool_entry); @@ -2270,7 +2267,7 @@ static void 
qeth_initialize_working_pool_list(struct qeth_card *card) { struct qeth_buffer_pool_entry *entry; - QETH_DBF_TEXT(trace, 5, "inwrklst"); + QETH_DBF_TEXT(TRACE, 5, "inwrklst"); list_for_each_entry(entry, &card->qdio.init_pool.entry_list, init_list) { @@ -2359,7 +2356,7 @@ int qeth_init_qdio_queues(struct qeth_card *card) int i, j; int rc; - QETH_DBF_TEXT(setup, 2, "initqdqs"); + QETH_DBF_TEXT(SETUP, 2, "initqdqs"); /* inbound queue */ memset(card->qdio.in_q->qdio_bufs, 0, @@ -2373,12 +2370,12 @@ int qeth_init_qdio_queues(struct qeth_card *card) rc = do_QDIO(CARD_DDEV(card), QDIO_FLAG_SYNC_INPUT, 0, 0, card->qdio.in_buf_pool.buf_count - 1, NULL); if (rc) { - QETH_DBF_TEXT_(setup, 2, "1err%d", rc); + QETH_DBF_TEXT_(SETUP, 2, "1err%d", rc); return rc; } rc = qdio_synchronize(CARD_DDEV(card), QDIO_FLAG_SYNC_INPUT, 0); if (rc) { - QETH_DBF_TEXT_(setup, 2, "2err%d", rc); + QETH_DBF_TEXT_(SETUP, 2, "2err%d", rc); return rc; } /* outbound queue */ @@ -2462,11 +2459,8 @@ int qeth_send_ipa_cmd(struct qeth_card *card, struct qeth_cmd_buffer *iob, { int rc; char prot_type; - int cmd; - cmd = ((struct qeth_ipa_cmd *) - (iob->data+IPA_PDU_HEADER_SIZE))->hdr.command; - QETH_DBF_TEXT(trace, 4, "sendipa"); + QETH_DBF_TEXT(TRACE, 4, "sendipa"); if (card->options.layer2) if (card->info.type == QETH_CARD_TYPE_OSN) @@ -2476,14 +2470,8 @@ int qeth_send_ipa_cmd(struct qeth_card *card, struct qeth_cmd_buffer *iob, else prot_type = QETH_PROT_TCPIP; qeth_prepare_ipa_cmd(card, iob, prot_type); - rc = qeth_send_control_data(card, IPA_CMD_LENGTH, iob, - reply_cb, reply_param); - if (rc != 0) { - char *ipa_cmd_name; - ipa_cmd_name = qeth_get_ipa_cmd_name(cmd); - PRINT_ERR("%s %s(%x) returned %s(%x)\n", __FUNCTION__, - ipa_cmd_name, cmd, qeth_get_ipa_msg(rc), rc); - } + rc = qeth_send_control_data(card, IPA_CMD_LENGTH, + iob, reply_cb, reply_param); return rc; } EXPORT_SYMBOL_GPL(qeth_send_ipa_cmd); @@ -2504,7 +2492,7 @@ int qeth_send_startlan(struct qeth_card *card) { int rc; - QETH_DBF_TEXT(setup, 2, "strtlan"); + QETH_DBF_TEXT(SETUP, 2, "strtlan"); rc = qeth_send_startstoplan(card, IPA_CMD_STARTLAN, 0); return rc; @@ -2520,7 +2508,7 @@ int qeth_send_stoplan(struct qeth_card *card) * TCP/IP (we!) never issue a STOPLAN * is this right ?!? 
*/ - QETH_DBF_TEXT(setup, 2, "stoplan"); + QETH_DBF_TEXT(SETUP, 2, "stoplan"); rc = qeth_send_startstoplan(card, IPA_CMD_STOPLAN, 0); return rc; @@ -2532,7 +2520,7 @@ int qeth_default_setadapterparms_cb(struct qeth_card *card, { struct qeth_ipa_cmd *cmd; - QETH_DBF_TEXT(trace, 4, "defadpcb"); + QETH_DBF_TEXT(TRACE, 4, "defadpcb"); cmd = (struct qeth_ipa_cmd *) data; if (cmd->hdr.return_code == 0) @@ -2547,7 +2535,7 @@ static int qeth_query_setadapterparms_cb(struct qeth_card *card, { struct qeth_ipa_cmd *cmd; - QETH_DBF_TEXT(trace, 3, "quyadpcb"); + QETH_DBF_TEXT(TRACE, 3, "quyadpcb"); cmd = (struct qeth_ipa_cmd *) data; if (cmd->data.setadapterparms.data.query_cmds_supp.lan_type & 0x7f) @@ -2581,7 +2569,7 @@ int qeth_query_setadapterparms(struct qeth_card *card) int rc; struct qeth_cmd_buffer *iob; - QETH_DBF_TEXT(trace, 3, "queryadp"); + QETH_DBF_TEXT(TRACE, 3, "queryadp"); iob = qeth_get_adapter_cmd(card, IPA_SETADP_QUERY_COMMANDS_SUPPORTED, sizeof(struct qeth_ipacmd_setadpparms)); rc = qeth_send_ipa_cmd(card, iob, qeth_query_setadapterparms_cb, NULL); @@ -2593,14 +2581,14 @@ int qeth_check_qdio_errors(struct qdio_buffer *buf, unsigned int qdio_error, unsigned int siga_error, const char *dbftext) { if (qdio_error || siga_error) { - QETH_DBF_TEXT(trace, 2, dbftext); - QETH_DBF_TEXT(qerr, 2, dbftext); - QETH_DBF_TEXT_(qerr, 2, " F15=%02X", + QETH_DBF_TEXT(TRACE, 2, dbftext); + QETH_DBF_TEXT(QERR, 2, dbftext); + QETH_DBF_TEXT_(QERR, 2, " F15=%02X", buf->element[15].flags & 0xff); - QETH_DBF_TEXT_(qerr, 2, " F14=%02X", + QETH_DBF_TEXT_(QERR, 2, " F14=%02X", buf->element[14].flags & 0xff); - QETH_DBF_TEXT_(qerr, 2, " qerr=%X", qdio_error); - QETH_DBF_TEXT_(qerr, 2, " serr=%X", siga_error); + QETH_DBF_TEXT_(QERR, 2, " qerr=%X", qdio_error); + QETH_DBF_TEXT_(QERR, 2, " serr=%X", siga_error); return 1; } return 0; @@ -2615,7 +2603,7 @@ void qeth_queue_input_buffer(struct qeth_card *card, int index) int rc; int newcount = 0; - QETH_DBF_TEXT(trace, 6, "queinbuf"); + QETH_DBF_TEXT(TRACE, 6, "queinbuf"); count = (index < queue->next_buf_to_init)? 
card->qdio.in_buf_pool.buf_count - (queue->next_buf_to_init - index) : @@ -2671,8 +2659,8 @@ void qeth_queue_input_buffer(struct qeth_card *card, int index) PRINT_WARN("qeth_queue_input_buffer's do_QDIO " "return %i (device %s).\n", rc, CARD_DDEV_ID(card)); - QETH_DBF_TEXT(trace, 2, "qinberr"); - QETH_DBF_TEXT_(trace, 2, "%s", CARD_BUS_ID(card)); + QETH_DBF_TEXT(TRACE, 2, "qinberr"); + QETH_DBF_TEXT_(TRACE, 2, "%s", CARD_BUS_ID(card)); } queue->next_buf_to_init = (queue->next_buf_to_init + count) % QDIO_MAX_BUFFERS_PER_Q; @@ -2687,22 +2675,22 @@ static int qeth_handle_send_error(struct qeth_card *card, int sbalf15 = buffer->buffer->element[15].flags & 0xff; int cc = siga_err & 3; - QETH_DBF_TEXT(trace, 6, "hdsnderr"); + QETH_DBF_TEXT(TRACE, 6, "hdsnderr"); qeth_check_qdio_errors(buffer->buffer, qdio_err, siga_err, "qouterr"); switch (cc) { case 0: if (qdio_err) { - QETH_DBF_TEXT(trace, 1, "lnkfail"); - QETH_DBF_TEXT_(trace, 1, "%s", CARD_BUS_ID(card)); - QETH_DBF_TEXT_(trace, 1, "%04x %02x", + QETH_DBF_TEXT(TRACE, 1, "lnkfail"); + QETH_DBF_TEXT_(TRACE, 1, "%s", CARD_BUS_ID(card)); + QETH_DBF_TEXT_(TRACE, 1, "%04x %02x", (u16)qdio_err, (u8)sbalf15); return QETH_SEND_ERROR_LINK_FAILURE; } return QETH_SEND_ERROR_NONE; case 2: if (siga_err & QDIO_SIGA_ERROR_B_BIT_SET) { - QETH_DBF_TEXT(trace, 1, "SIGAcc2B"); - QETH_DBF_TEXT_(trace, 1, "%s", CARD_BUS_ID(card)); + QETH_DBF_TEXT(TRACE, 1, "SIGAcc2B"); + QETH_DBF_TEXT_(TRACE, 1, "%s", CARD_BUS_ID(card)); return QETH_SEND_ERROR_KICK_IT; } if ((sbalf15 >= 15) && (sbalf15 <= 31)) @@ -2710,13 +2698,13 @@ static int qeth_handle_send_error(struct qeth_card *card, return QETH_SEND_ERROR_LINK_FAILURE; /* look at qdio_error and sbalf 15 */ case 1: - QETH_DBF_TEXT(trace, 1, "SIGAcc1"); - QETH_DBF_TEXT_(trace, 1, "%s", CARD_BUS_ID(card)); + QETH_DBF_TEXT(TRACE, 1, "SIGAcc1"); + QETH_DBF_TEXT_(TRACE, 1, "%s", CARD_BUS_ID(card)); return QETH_SEND_ERROR_LINK_FAILURE; case 3: default: - QETH_DBF_TEXT(trace, 1, "SIGAcc3"); - QETH_DBF_TEXT_(trace, 1, "%s", CARD_BUS_ID(card)); + QETH_DBF_TEXT(TRACE, 1, "SIGAcc3"); + QETH_DBF_TEXT_(TRACE, 1, "%s", CARD_BUS_ID(card)); return QETH_SEND_ERROR_KICK_IT; } } @@ -2731,7 +2719,7 @@ static void qeth_switch_to_packing_if_needed(struct qeth_qdio_out_q *queue) if (atomic_read(&queue->used_buffers) >= QETH_HIGH_WATERMARK_PACK){ /* switch non-PACKING -> PACKING */ - QETH_DBF_TEXT(trace, 6, "np->pack"); + QETH_DBF_TEXT(TRACE, 6, "np->pack"); if (queue->card->options.performance_stats) queue->card->perf_stats.sc_dp_p++; queue->do_pack = 1; @@ -2754,7 +2742,7 @@ static int qeth_switch_to_nonpacking_if_needed(struct qeth_qdio_out_q *queue) if (atomic_read(&queue->used_buffers) <= QETH_LOW_WATERMARK_PACK) { /* switch PACKING -> non-PACKING */ - QETH_DBF_TEXT(trace, 6, "pack->np"); + QETH_DBF_TEXT(TRACE, 6, "pack->np"); if (queue->card->options.performance_stats) queue->card->perf_stats.sc_p_dp++; queue->do_pack = 0; @@ -2804,7 +2792,7 @@ static void qeth_flush_buffers(struct qeth_qdio_out_q *queue, int under_int, int i; unsigned int qdio_flags; - QETH_DBF_TEXT(trace, 6, "flushbuf"); + QETH_DBF_TEXT(TRACE, 6, "flushbuf"); for (i = index; i < index + count; ++i) { buf = &queue->bufs[i % QDIO_MAX_BUFFERS_PER_Q]; @@ -2858,9 +2846,9 @@ static void qeth_flush_buffers(struct qeth_qdio_out_q *queue, int under_int, qeth_get_micros() - queue->card->perf_stats.outbound_do_qdio_start_time; if (rc) { - QETH_DBF_TEXT(trace, 2, "flushbuf"); - QETH_DBF_TEXT_(trace, 2, " err%d", rc); - QETH_DBF_TEXT_(trace, 2, "%s", CARD_DDEV_ID(queue->card)); + 
QETH_DBF_TEXT(TRACE, 2, "flushbuf"); + QETH_DBF_TEXT_(TRACE, 2, " err%d", rc); + QETH_DBF_TEXT_(TRACE, 2, "%s", CARD_DDEV_ID(queue->card)); queue->card->stats.tx_errors += count; /* this must not happen under normal circumstances. if it * happens something is really wrong -> recover */ @@ -2922,12 +2910,12 @@ void qeth_qdio_output_handler(struct ccw_device *ccwdev, unsigned int status, struct qeth_qdio_out_buffer *buffer; int i; - QETH_DBF_TEXT(trace, 6, "qdouhdl"); + QETH_DBF_TEXT(TRACE, 6, "qdouhdl"); if (status & QDIO_STATUS_LOOK_FOR_ERROR) { if (status & QDIO_STATUS_ACTIVATE_CHECK_CONDITION) { - QETH_DBF_TEXT(trace, 2, "achkcond"); - QETH_DBF_TEXT_(trace, 2, "%s", CARD_BUS_ID(card)); - QETH_DBF_TEXT_(trace, 2, "%08x", status); + QETH_DBF_TEXT(TRACE, 2, "achkcond"); + QETH_DBF_TEXT_(TRACE, 2, "%s", CARD_BUS_ID(card)); + QETH_DBF_TEXT_(TRACE, 2, "%08x", status); netif_stop_queue(card->dev); qeth_schedule_recovery(card); return; @@ -3075,7 +3063,7 @@ struct sk_buff *qeth_prepare_skb(struct qeth_card *card, struct sk_buff *skb, { struct sk_buff *new_skb; - QETH_DBF_TEXT(trace, 6, "prepskb"); + QETH_DBF_TEXT(TRACE, 6, "prepskb"); new_skb = qeth_realloc_headroom(card, skb, sizeof(struct qeth_hdr)); @@ -3162,7 +3150,7 @@ static int qeth_fill_buffer(struct qeth_qdio_out_q *queue, struct qeth_hdr_tso *hdr; int flush_cnt = 0, hdr_len, large_send = 0; - QETH_DBF_TEXT(trace, 6, "qdfillbf"); + QETH_DBF_TEXT(TRACE, 6, "qdfillbf"); buffer = buf->buffer; atomic_inc(&skb->users); @@ -3191,12 +3179,12 @@ static int qeth_fill_buffer(struct qeth_qdio_out_q *queue, (int *)&buf->next_element_to_fill); if (!queue->do_pack) { - QETH_DBF_TEXT(trace, 6, "fillbfnp"); + QETH_DBF_TEXT(TRACE, 6, "fillbfnp"); /* set state to PRIMED -> will be flushed */ atomic_set(&buf->state, QETH_QDIO_BUF_PRIMED); flush_cnt = 1; } else { - QETH_DBF_TEXT(trace, 6, "fillbfpa"); + QETH_DBF_TEXT(TRACE, 6, "fillbfpa"); if (queue->card->options.performance_stats) queue->card->perf_stats.skbs_sent_pack++; if (buf->next_element_to_fill >= @@ -3222,7 +3210,7 @@ int qeth_do_send_packet_fast(struct qeth_card *card, int flush_cnt = 0; int index; - QETH_DBF_TEXT(trace, 6, "dosndpfa"); + QETH_DBF_TEXT(TRACE, 6, "dosndpfa"); /* spin until we get the queue ... */ while (atomic_cmpxchg(&queue->state, QETH_OUT_Q_UNLOCKED, @@ -3275,7 +3263,7 @@ int qeth_do_send_packet(struct qeth_card *card, struct qeth_qdio_out_q *queue, int tmp; int rc = 0; - QETH_DBF_TEXT(trace, 6, "dosndpkt"); + QETH_DBF_TEXT(TRACE, 6, "dosndpkt"); /* spin until we get the queue ... 
*/ while (atomic_cmpxchg(&queue->state, QETH_OUT_Q_UNLOCKED, @@ -3382,14 +3370,14 @@ static int qeth_setadp_promisc_mode_cb(struct qeth_card *card, struct qeth_ipa_cmd *cmd; struct qeth_ipacmd_setadpparms *setparms; - QETH_DBF_TEXT(trace, 4, "prmadpcb"); + QETH_DBF_TEXT(TRACE, 4, "prmadpcb"); cmd = (struct qeth_ipa_cmd *) data; setparms = &(cmd->data.setadapterparms); qeth_default_setadapterparms_cb(card, reply, (unsigned long)cmd); if (cmd->hdr.return_code) { - QETH_DBF_TEXT_(trace, 4, "prmrc%2.2x", cmd->hdr.return_code); + QETH_DBF_TEXT_(TRACE, 4, "prmrc%2.2x", cmd->hdr.return_code); setparms->data.mode = SET_PROMISC_MODE_OFF; } card->info.promisc_mode = setparms->data.mode; @@ -3403,7 +3391,7 @@ void qeth_setadp_promisc_mode(struct qeth_card *card) struct qeth_cmd_buffer *iob; struct qeth_ipa_cmd *cmd; - QETH_DBF_TEXT(trace, 4, "setprom"); + QETH_DBF_TEXT(TRACE, 4, "setprom"); if (((dev->flags & IFF_PROMISC) && (card->info.promisc_mode == SET_PROMISC_MODE_ON)) || @@ -3413,7 +3401,7 @@ void qeth_setadp_promisc_mode(struct qeth_card *card) mode = SET_PROMISC_MODE_OFF; if (dev->flags & IFF_PROMISC) mode = SET_PROMISC_MODE_ON; - QETH_DBF_TEXT_(trace, 4, "mode:%x", mode); + QETH_DBF_TEXT_(TRACE, 4, "mode:%x", mode); iob = qeth_get_adapter_cmd(card, IPA_SETADP_SET_PROMISC_MODE, sizeof(struct qeth_ipacmd_setadpparms)); @@ -3430,9 +3418,9 @@ int qeth_change_mtu(struct net_device *dev, int new_mtu) card = netdev_priv(dev); - QETH_DBF_TEXT(trace, 4, "chgmtu"); + QETH_DBF_TEXT(TRACE, 4, "chgmtu"); sprintf(dbf_text, "%8x", new_mtu); - QETH_DBF_TEXT(trace, 4, dbf_text); + QETH_DBF_TEXT(TRACE, 4, dbf_text); if (new_mtu < 64) return -EINVAL; @@ -3452,7 +3440,7 @@ struct net_device_stats *qeth_get_stats(struct net_device *dev) card = netdev_priv(dev); - QETH_DBF_TEXT(trace, 5, "getstat"); + QETH_DBF_TEXT(TRACE, 5, "getstat"); return &card->stats; } @@ -3463,7 +3451,7 @@ static int qeth_setadpparms_change_macaddr_cb(struct qeth_card *card, { struct qeth_ipa_cmd *cmd; - QETH_DBF_TEXT(trace, 4, "chgmaccb"); + QETH_DBF_TEXT(TRACE, 4, "chgmaccb"); cmd = (struct qeth_ipa_cmd *) data; if (!card->options.layer2 || @@ -3483,7 +3471,7 @@ int qeth_setadpparms_change_macaddr(struct qeth_card *card) struct qeth_cmd_buffer *iob; struct qeth_ipa_cmd *cmd; - QETH_DBF_TEXT(trace, 4, "chgmac"); + QETH_DBF_TEXT(TRACE, 4, "chgmac"); iob = qeth_get_adapter_cmd(card, IPA_SETADP_ALTER_MAC_ADDRESS, sizeof(struct qeth_ipacmd_setadpparms)); @@ -3581,7 +3569,7 @@ static int qeth_send_ipa_snmp_cmd(struct qeth_card *card, { u16 s1, s2; - QETH_DBF_TEXT(trace, 4, "sendsnmp"); + QETH_DBF_TEXT(TRACE, 4, "sendsnmp"); memcpy(iob->data, IPA_PDU_HEADER, IPA_PDU_HEADER_SIZE); memcpy(QETH_IPA_CMD_DEST_ADDR(iob->data), @@ -3606,7 +3594,7 @@ static int qeth_snmp_command_cb(struct qeth_card *card, unsigned char *data; __u16 data_len; - QETH_DBF_TEXT(trace, 3, "snpcmdcb"); + QETH_DBF_TEXT(TRACE, 3, "snpcmdcb"); cmd = (struct qeth_ipa_cmd *) sdata; data = (unsigned char *)((char *)cmd - reply->offset); @@ -3614,13 +3602,13 @@ static int qeth_snmp_command_cb(struct qeth_card *card, snmp = &cmd->data.setadapterparms.data.snmp; if (cmd->hdr.return_code) { - QETH_DBF_TEXT_(trace, 4, "scer1%i", cmd->hdr.return_code); + QETH_DBF_TEXT_(TRACE, 4, "scer1%i", cmd->hdr.return_code); return 0; } if (cmd->data.setadapterparms.hdr.return_code) { cmd->hdr.return_code = cmd->data.setadapterparms.hdr.return_code; - QETH_DBF_TEXT_(trace, 4, "scer2%i", cmd->hdr.return_code); + QETH_DBF_TEXT_(TRACE, 4, "scer2%i", cmd->hdr.return_code); return 0; } data_len = 
*((__u16 *)QETH_IPA_PDU_LEN_PDU1(data)); @@ -3631,13 +3619,13 @@ static int qeth_snmp_command_cb(struct qeth_card *card, /* check if there is enough room in userspace */ if ((qinfo->udata_len - qinfo->udata_offset) < data_len) { - QETH_DBF_TEXT_(trace, 4, "scer3%i", -ENOMEM); + QETH_DBF_TEXT_(TRACE, 4, "scer3%i", -ENOMEM); cmd->hdr.return_code = -ENOMEM; return 0; } - QETH_DBF_TEXT_(trace, 4, "snore%i", + QETH_DBF_TEXT_(TRACE, 4, "snore%i", cmd->data.setadapterparms.hdr.used_total); - QETH_DBF_TEXT_(trace, 4, "sseqn%i", + QETH_DBF_TEXT_(TRACE, 4, "sseqn%i", cmd->data.setadapterparms.hdr.seq_no); /*copy entries to user buffer*/ if (cmd->data.setadapterparms.hdr.seq_no == 1) { @@ -3651,9 +3639,9 @@ static int qeth_snmp_command_cb(struct qeth_card *card, } qinfo->udata_offset += data_len; /* check if all replies received ... */ - QETH_DBF_TEXT_(trace, 4, "srtot%i", + QETH_DBF_TEXT_(TRACE, 4, "srtot%i", cmd->data.setadapterparms.hdr.used_total); - QETH_DBF_TEXT_(trace, 4, "srseq%i", + QETH_DBF_TEXT_(TRACE, 4, "srseq%i", cmd->data.setadapterparms.hdr.seq_no); if (cmd->data.setadapterparms.hdr.seq_no < cmd->data.setadapterparms.hdr.used_total) @@ -3670,7 +3658,7 @@ int qeth_snmp_command(struct qeth_card *card, char __user *udata) struct qeth_arp_query_info qinfo = {0, }; int rc = 0; - QETH_DBF_TEXT(trace, 3, "snmpcmd"); + QETH_DBF_TEXT(TRACE, 3, "snmpcmd"); if (card->info.guestlan) return -EOPNOTSUPP; @@ -3686,7 +3674,7 @@ int qeth_snmp_command(struct qeth_card *card, char __user *udata) return -EFAULT; ureq = kmalloc(req_len+sizeof(struct qeth_snmp_ureq_hdr), GFP_KERNEL); if (!ureq) { - QETH_DBF_TEXT(trace, 2, "snmpnome"); + QETH_DBF_TEXT(TRACE, 2, "snmpnome"); return -ENOMEM; } if (copy_from_user(ureq, udata, @@ -3741,7 +3729,7 @@ static int qeth_qdio_establish(struct qeth_card *card) int i, j, k; int rc = 0; - QETH_DBF_TEXT(setup, 2, "qdioest"); + QETH_DBF_TEXT(SETUP, 2, "qdioest"); qib_param_field = kzalloc(QDIO_MAX_BUFFERS_PER_Q * sizeof(char), GFP_KERNEL); @@ -3810,8 +3798,8 @@ static int qeth_qdio_establish(struct qeth_card *card) static void qeth_core_free_card(struct qeth_card *card) { - QETH_DBF_TEXT(setup, 2, "freecrd"); - QETH_DBF_HEX(setup, 2, &card, sizeof(void *)); + QETH_DBF_TEXT(SETUP, 2, "freecrd"); + QETH_DBF_HEX(SETUP, 2, &card, sizeof(void *)); qeth_clean_channel(&card->read); qeth_clean_channel(&card->write); if (card->dev) @@ -3868,7 +3856,7 @@ int qeth_core_hardsetup_card(struct qeth_card *card) int mpno; int rc; - QETH_DBF_TEXT(setup, 2, "hrdsetup"); + QETH_DBF_TEXT(SETUP, 2, "hrdsetup"); atomic_set(&card->force_alloc_skb, 0); retry: if (retries < 3) { @@ -3882,10 +3870,10 @@ retry: } rc = qeth_qdio_clear_card(card, card->info.type != QETH_CARD_TYPE_IQD); if (rc == -ERESTARTSYS) { - QETH_DBF_TEXT(setup, 2, "break1"); + QETH_DBF_TEXT(SETUP, 2, "break1"); return rc; } else if (rc) { - QETH_DBF_TEXT_(setup, 2, "1err%d", rc); + QETH_DBF_TEXT_(SETUP, 2, "1err%d", rc); if (--retries < 0) goto out; else @@ -3894,7 +3882,7 @@ retry: rc = qeth_get_unitaddr(card); if (rc) { - QETH_DBF_TEXT_(setup, 2, "2err%d", rc); + QETH_DBF_TEXT_(SETUP, 2, "2err%d", rc); return rc; } @@ -3909,10 +3897,10 @@ retry: qeth_init_func_level(card); rc = qeth_idx_activate_channel(&card->read, qeth_idx_read_cb); if (rc == -ERESTARTSYS) { - QETH_DBF_TEXT(setup, 2, "break2"); + QETH_DBF_TEXT(SETUP, 2, "break2"); return rc; } else if (rc) { - QETH_DBF_TEXT_(setup, 2, "3err%d", rc); + QETH_DBF_TEXT_(SETUP, 2, "3err%d", rc); if (--retries < 0) goto out; else @@ -3920,10 +3908,10 @@ retry: } rc = 
qeth_idx_activate_channel(&card->write, qeth_idx_write_cb); if (rc == -ERESTARTSYS) { - QETH_DBF_TEXT(setup, 2, "break3"); + QETH_DBF_TEXT(SETUP, 2, "break3"); return rc; } else if (rc) { - QETH_DBF_TEXT_(setup, 2, "4err%d", rc); + QETH_DBF_TEXT_(SETUP, 2, "4err%d", rc); if (--retries < 0) goto out; else @@ -3931,7 +3919,7 @@ retry: } rc = qeth_mpc_initialize(card); if (rc) { - QETH_DBF_TEXT_(setup, 2, "5err%d", rc); + QETH_DBF_TEXT_(SETUP, 2, "5err%d", rc); goto out; } return 0; @@ -3992,7 +3980,7 @@ struct sk_buff *qeth_core_get_next_skb(struct qeth_card *card, int use_rx_sg = 0; int frag = 0; - QETH_DBF_TEXT(trace, 6, "nextskb"); + QETH_DBF_TEXT(TRACE, 6, "nextskb"); /* qeth_hdr must not cross element boundaries */ if (element->length < offset + sizeof(struct qeth_hdr)) { if (qeth_is_last_sbale(element)) @@ -4048,13 +4036,13 @@ struct sk_buff *qeth_core_get_next_skb(struct qeth_card *card, skb_len -= data_len; if (skb_len) { if (qeth_is_last_sbale(element)) { - QETH_DBF_TEXT(trace, 4, "unexeob"); - QETH_DBF_TEXT_(trace, 4, "%s", + QETH_DBF_TEXT(TRACE, 4, "unexeob"); + QETH_DBF_TEXT_(TRACE, 4, "%s", CARD_BUS_ID(card)); - QETH_DBF_TEXT(qerr, 2, "unexeob"); - QETH_DBF_TEXT_(qerr, 2, "%s", + QETH_DBF_TEXT(QERR, 2, "unexeob"); + QETH_DBF_TEXT_(QERR, 2, "%s", CARD_BUS_ID(card)); - QETH_DBF_HEX(misc, 4, buffer, sizeof(*buffer)); + QETH_DBF_HEX(MISC, 4, buffer, sizeof(*buffer)); dev_kfree_skb_any(skb); card->stats.rx_errors++; return NULL; @@ -4077,8 +4065,8 @@ no_mem: if (net_ratelimit()) { PRINT_WARN("No memory for packet received on %s.\n", QETH_CARD_IFNAME(card)); - QETH_DBF_TEXT(trace, 2, "noskbmem"); - QETH_DBF_TEXT_(trace, 2, "%s", CARD_BUS_ID(card)); + QETH_DBF_TEXT(TRACE, 2, "noskbmem"); + QETH_DBF_TEXT_(TRACE, 2, "%s", CARD_BUS_ID(card)); } card->stats.rx_dropped++; return NULL; @@ -4087,80 +4075,39 @@ EXPORT_SYMBOL_GPL(qeth_core_get_next_skb); static void qeth_unregister_dbf_views(void) { - if (qeth_dbf_setup) - debug_unregister(qeth_dbf_setup); - if (qeth_dbf_qerr) - debug_unregister(qeth_dbf_qerr); - if (qeth_dbf_sense) - debug_unregister(qeth_dbf_sense); - if (qeth_dbf_misc) - debug_unregister(qeth_dbf_misc); - if (qeth_dbf_data) - debug_unregister(qeth_dbf_data); - if (qeth_dbf_control) - debug_unregister(qeth_dbf_control); - if (qeth_dbf_trace) - debug_unregister(qeth_dbf_trace); + int x; + for (x = 0; x < QETH_DBF_INFOS; x++) { + debug_unregister(qeth_dbf[x].id); + qeth_dbf[x].id = NULL; + } } static int qeth_register_dbf_views(void) { - qeth_dbf_setup = debug_register(QETH_DBF_SETUP_NAME, - QETH_DBF_SETUP_PAGES, - QETH_DBF_SETUP_NR_AREAS, - QETH_DBF_SETUP_LEN); - qeth_dbf_misc = debug_register(QETH_DBF_MISC_NAME, - QETH_DBF_MISC_PAGES, - QETH_DBF_MISC_NR_AREAS, - QETH_DBF_MISC_LEN); - qeth_dbf_data = debug_register(QETH_DBF_DATA_NAME, - QETH_DBF_DATA_PAGES, - QETH_DBF_DATA_NR_AREAS, - QETH_DBF_DATA_LEN); - qeth_dbf_control = debug_register(QETH_DBF_CONTROL_NAME, - QETH_DBF_CONTROL_PAGES, - QETH_DBF_CONTROL_NR_AREAS, - QETH_DBF_CONTROL_LEN); - qeth_dbf_sense = debug_register(QETH_DBF_SENSE_NAME, - QETH_DBF_SENSE_PAGES, - QETH_DBF_SENSE_NR_AREAS, - QETH_DBF_SENSE_LEN); - qeth_dbf_qerr = debug_register(QETH_DBF_QERR_NAME, - QETH_DBF_QERR_PAGES, - QETH_DBF_QERR_NR_AREAS, - QETH_DBF_QERR_LEN); - qeth_dbf_trace = debug_register(QETH_DBF_TRACE_NAME, - QETH_DBF_TRACE_PAGES, - QETH_DBF_TRACE_NR_AREAS, - QETH_DBF_TRACE_LEN); - - if ((qeth_dbf_setup == NULL) || (qeth_dbf_misc == NULL) || - (qeth_dbf_data == NULL) || (qeth_dbf_control == NULL) || - (qeth_dbf_sense == NULL) || 
(qeth_dbf_qerr == NULL) || - (qeth_dbf_trace == NULL)) { - qeth_unregister_dbf_views(); - return -ENOMEM; - } - debug_register_view(qeth_dbf_setup, &debug_hex_ascii_view); - debug_set_level(qeth_dbf_setup, QETH_DBF_SETUP_LEVEL); - - debug_register_view(qeth_dbf_misc, &debug_hex_ascii_view); - debug_set_level(qeth_dbf_misc, QETH_DBF_MISC_LEVEL); - - debug_register_view(qeth_dbf_data, &debug_hex_ascii_view); - debug_set_level(qeth_dbf_data, QETH_DBF_DATA_LEVEL); - - debug_register_view(qeth_dbf_control, &debug_hex_ascii_view); - debug_set_level(qeth_dbf_control, QETH_DBF_CONTROL_LEVEL); - - debug_register_view(qeth_dbf_sense, &debug_hex_ascii_view); - debug_set_level(qeth_dbf_sense, QETH_DBF_SENSE_LEVEL); + int ret; + int x; + + for (x = 0; x < QETH_DBF_INFOS; x++) { + /* register the areas */ + qeth_dbf[x].id = debug_register(qeth_dbf[x].name, + qeth_dbf[x].pages, + qeth_dbf[x].areas, + qeth_dbf[x].len); + if (qeth_dbf[x].id == NULL) { + qeth_unregister_dbf_views(); + return -ENOMEM; + } - debug_register_view(qeth_dbf_qerr, &debug_hex_ascii_view); - debug_set_level(qeth_dbf_qerr, QETH_DBF_QERR_LEVEL); + /* register a view */ + ret = debug_register_view(qeth_dbf[x].id, qeth_dbf[x].view); + if (ret) { + qeth_unregister_dbf_views(); + return ret; + } - debug_register_view(qeth_dbf_trace, &debug_hex_ascii_view); - debug_set_level(qeth_dbf_trace, QETH_DBF_TRACE_LEVEL); + /* set a passing level */ + debug_set_level(qeth_dbf[x].id, qeth_dbf[x].level); + } return 0; } @@ -4205,17 +4152,17 @@ static int qeth_core_probe_device(struct ccwgroup_device *gdev) int rc; unsigned long flags; - QETH_DBF_TEXT(setup, 2, "probedev"); + QETH_DBF_TEXT(SETUP, 2, "probedev"); dev = &gdev->dev; if (!get_device(dev)) return -ENODEV; - QETH_DBF_TEXT_(setup, 2, "%s", gdev->dev.bus_id); + QETH_DBF_TEXT_(SETUP, 2, "%s", gdev->dev.bus_id); card = qeth_alloc_card(); if (!card) { - QETH_DBF_TEXT_(setup, 2, "1err%d", -ENOMEM); + QETH_DBF_TEXT_(SETUP, 2, "1err%d", -ENOMEM); rc = -ENOMEM; goto err_dev; } @@ -4231,12 +4178,12 @@ static int qeth_core_probe_device(struct ccwgroup_device *gdev) rc = qeth_determine_card_type(card); if (rc) { PRINT_WARN("%s: not a valid card type\n", __func__); - QETH_DBF_TEXT_(setup, 2, "3err%d", rc); + QETH_DBF_TEXT_(SETUP, 2, "3err%d", rc); goto err_card; } rc = qeth_setup_card(card); if (rc) { - QETH_DBF_TEXT_(setup, 2, "2err%d", rc); + QETH_DBF_TEXT_(SETUP, 2, "2err%d", rc); goto err_card; } diff --git a/drivers/s390/net/qeth_core_mpc.c b/drivers/s390/net/qeth_core_mpc.c index 441533f2062e..06f4de1f0507 100644 --- a/drivers/s390/net/qeth_core_mpc.c +++ b/drivers/s390/net/qeth_core_mpc.c @@ -230,7 +230,7 @@ static struct ipa_cmd_names qeth_ipa_cmd_names[] = { {IPA_CMD_STARTLAN, "startlan"}, {IPA_CMD_STOPLAN, "stoplan"}, {IPA_CMD_SETVMAC, "setvmac"}, - {IPA_CMD_DELVMAC, "delvmca"}, + {IPA_CMD_DELVMAC, "delvmac"}, {IPA_CMD_SETGMAC, "setgmac"}, {IPA_CMD_DELGMAC, "delgmac"}, {IPA_CMD_SETVLAN, "setvlan"}, diff --git a/drivers/s390/net/qeth_core_offl.c b/drivers/s390/net/qeth_core_offl.c index 8b407d6a83cf..822df8362856 100644 --- a/drivers/s390/net/qeth_core_offl.c +++ b/drivers/s390/net/qeth_core_offl.c @@ -31,7 +31,7 @@ int qeth_eddp_check_buffers_for_context(struct qeth_qdio_out_q *queue, int skbs_in_buffer; int buffers_needed = 0; - QETH_DBF_TEXT(trace, 5, "eddpcbfc"); + QETH_DBF_TEXT(TRACE, 5, "eddpcbfc"); while (elements_needed > 0) { buffers_needed++; if (atomic_read(&queue->bufs[index].state) != @@ -51,7 +51,7 @@ static void qeth_eddp_free_context(struct qeth_eddp_context *ctx) { int i; - 
QETH_DBF_TEXT(trace, 5, "eddpfctx"); + QETH_DBF_TEXT(TRACE, 5, "eddpfctx"); for (i = 0; i < ctx->num_pages; ++i) free_page((unsigned long)ctx->pages[i]); kfree(ctx->pages); @@ -76,7 +76,7 @@ void qeth_eddp_buf_release_contexts(struct qeth_qdio_out_buffer *buf) { struct qeth_eddp_context_reference *ref; - QETH_DBF_TEXT(trace, 6, "eddprctx"); + QETH_DBF_TEXT(TRACE, 6, "eddprctx"); while (!list_empty(&buf->ctx_list)) { ref = list_entry(buf->ctx_list.next, struct qeth_eddp_context_reference, list); @@ -91,7 +91,7 @@ static int qeth_eddp_buf_ref_context(struct qeth_qdio_out_buffer *buf, { struct qeth_eddp_context_reference *ref; - QETH_DBF_TEXT(trace, 6, "eddprfcx"); + QETH_DBF_TEXT(TRACE, 6, "eddprfcx"); ref = kmalloc(sizeof(struct qeth_eddp_context_reference), GFP_ATOMIC); if (ref == NULL) return -ENOMEM; @@ -112,7 +112,7 @@ int qeth_eddp_fill_buffer(struct qeth_qdio_out_q *queue, int must_refcnt = 1; int i; - QETH_DBF_TEXT(trace, 5, "eddpfibu"); + QETH_DBF_TEXT(TRACE, 5, "eddpfibu"); while (elements > 0) { buf = &queue->bufs[index]; if (atomic_read(&buf->state) != QETH_QDIO_BUF_EMPTY) { @@ -166,7 +166,7 @@ int qeth_eddp_fill_buffer(struct qeth_qdio_out_q *queue, } out_check: if (!queue->do_pack) { - QETH_DBF_TEXT(trace, 6, "fillbfnp"); + QETH_DBF_TEXT(TRACE, 6, "fillbfnp"); /* set state to PRIMED -> will be flushed */ if (buf->next_element_to_fill > 0) { atomic_set(&buf->state, QETH_QDIO_BUF_PRIMED); @@ -175,7 +175,7 @@ out_check: } else { if (queue->card->options.performance_stats) queue->card->perf_stats.skbs_sent_pack++; - QETH_DBF_TEXT(trace, 6, "fillbfpa"); + QETH_DBF_TEXT(TRACE, 6, "fillbfpa"); if (buf->next_element_to_fill >= QETH_MAX_BUFFER_ELEMENTS(queue->card)) { /* @@ -199,7 +199,7 @@ static void qeth_eddp_create_segment_hdrs(struct qeth_eddp_context *ctx, int pkt_len; struct qeth_eddp_element *element; - QETH_DBF_TEXT(trace, 5, "eddpcrsh"); + QETH_DBF_TEXT(TRACE, 5, "eddpcrsh"); page = ctx->pages[ctx->offset >> PAGE_SHIFT]; page_offset = ctx->offset % PAGE_SIZE; element = &ctx->elements[ctx->num_elements]; @@ -257,7 +257,7 @@ static void qeth_eddp_copy_data_tcp(char *dst, struct qeth_eddp_data *eddp, int copy_len; u8 *src; - QETH_DBF_TEXT(trace, 5, "eddpcdtc"); + QETH_DBF_TEXT(TRACE, 5, "eddpcdtc"); if (skb_shinfo(eddp->skb)->nr_frags == 0) { skb_copy_from_linear_data_offset(eddp->skb, eddp->skb_offset, dst, len); @@ -305,7 +305,7 @@ static void qeth_eddp_create_segment_data_tcp(struct qeth_eddp_context *ctx, struct qeth_eddp_element *element; int first_lap = 1; - QETH_DBF_TEXT(trace, 5, "eddpcsdt"); + QETH_DBF_TEXT(TRACE, 5, "eddpcsdt"); page = ctx->pages[ctx->offset >> PAGE_SHIFT]; page_offset = ctx->offset % PAGE_SIZE; element = &ctx->elements[ctx->num_elements]; @@ -346,7 +346,7 @@ static __wsum qeth_eddp_check_tcp4_hdr(struct qeth_eddp_data *eddp, { __wsum phcsum; /* pseudo header checksum */ - QETH_DBF_TEXT(trace, 5, "eddpckt4"); + QETH_DBF_TEXT(TRACE, 5, "eddpckt4"); eddp->th.tcp.h.check = 0; /* compute pseudo header checksum */ phcsum = csum_tcpudp_nofold(eddp->nh.ip4.h.saddr, eddp->nh.ip4.h.daddr, @@ -361,7 +361,7 @@ static __wsum qeth_eddp_check_tcp6_hdr(struct qeth_eddp_data *eddp, __be32 proto; __wsum phcsum; /* pseudo header checksum */ - QETH_DBF_TEXT(trace, 5, "eddpckt6"); + QETH_DBF_TEXT(TRACE, 5, "eddpckt6"); eddp->th.tcp.h.check = 0; /* compute pseudo header checksum */ phcsum = csum_partial((u8 *)&eddp->nh.ip6.h.saddr, @@ -378,7 +378,7 @@ static struct qeth_eddp_data *qeth_eddp_create_eddp_data(struct qeth_hdr *qh, { struct qeth_eddp_data *eddp; - 
QETH_DBF_TEXT(trace, 5, "eddpcrda"); + QETH_DBF_TEXT(TRACE, 5, "eddpcrda"); eddp = kzalloc(sizeof(struct qeth_eddp_data), GFP_ATOMIC); if (eddp) { eddp->nhl = nhl; @@ -398,7 +398,7 @@ static void __qeth_eddp_fill_context_tcp(struct qeth_eddp_context *ctx, int data_len; __wsum hcsum; - QETH_DBF_TEXT(trace, 5, "eddpftcp"); + QETH_DBF_TEXT(TRACE, 5, "eddpftcp"); eddp->skb_offset = sizeof(struct qeth_hdr) + eddp->nhl + eddp->thl; if (eddp->qh.hdr.l2.id == QETH_HEADER_TYPE_LAYER2) { eddp->skb_offset += sizeof(struct ethhdr); @@ -457,7 +457,7 @@ static int qeth_eddp_fill_context_tcp(struct qeth_eddp_context *ctx, { struct qeth_eddp_data *eddp = NULL; - QETH_DBF_TEXT(trace, 5, "eddpficx"); + QETH_DBF_TEXT(TRACE, 5, "eddpficx"); /* create our segmentation headers and copy original headers */ if (skb->protocol == htons(ETH_P_IP)) eddp = qeth_eddp_create_eddp_data(qhdr, @@ -473,7 +473,7 @@ static int qeth_eddp_fill_context_tcp(struct qeth_eddp_context *ctx, tcp_hdrlen(skb)); if (eddp == NULL) { - QETH_DBF_TEXT(trace, 2, "eddpfcnm"); + QETH_DBF_TEXT(TRACE, 2, "eddpfcnm"); return -ENOMEM; } if (qhdr->hdr.l2.id == QETH_HEADER_TYPE_LAYER2) { @@ -499,7 +499,7 @@ static void qeth_eddp_calc_num_pages(struct qeth_eddp_context *ctx, { int skbs_per_page; - QETH_DBF_TEXT(trace, 5, "eddpcanp"); + QETH_DBF_TEXT(TRACE, 5, "eddpcanp"); /* can we put multiple skbs in one page? */ skbs_per_page = PAGE_SIZE / (skb_shinfo(skb)->gso_size + hdr_len); if (skbs_per_page > 1) { @@ -524,30 +524,30 @@ static struct qeth_eddp_context *qeth_eddp_create_context_generic( u8 *addr; int i; - QETH_DBF_TEXT(trace, 5, "creddpcg"); + QETH_DBF_TEXT(TRACE, 5, "creddpcg"); /* create the context and allocate pages */ ctx = kzalloc(sizeof(struct qeth_eddp_context), GFP_ATOMIC); if (ctx == NULL) { - QETH_DBF_TEXT(trace, 2, "ceddpcn1"); + QETH_DBF_TEXT(TRACE, 2, "ceddpcn1"); return NULL; } ctx->type = QETH_LARGE_SEND_EDDP; qeth_eddp_calc_num_pages(ctx, skb, hdr_len); if (ctx->elements_per_skb > QETH_MAX_BUFFER_ELEMENTS(card)) { - QETH_DBF_TEXT(trace, 2, "ceddpcis"); + QETH_DBF_TEXT(TRACE, 2, "ceddpcis"); kfree(ctx); return NULL; } ctx->pages = kcalloc(ctx->num_pages, sizeof(u8 *), GFP_ATOMIC); if (ctx->pages == NULL) { - QETH_DBF_TEXT(trace, 2, "ceddpcn2"); + QETH_DBF_TEXT(TRACE, 2, "ceddpcn2"); kfree(ctx); return NULL; } for (i = 0; i < ctx->num_pages; ++i) { addr = (u8 *)get_zeroed_page(GFP_ATOMIC); if (addr == NULL) { - QETH_DBF_TEXT(trace, 2, "ceddpcn3"); + QETH_DBF_TEXT(TRACE, 2, "ceddpcn3"); ctx->num_pages = i; qeth_eddp_free_context(ctx); return NULL; @@ -557,7 +557,7 @@ static struct qeth_eddp_context *qeth_eddp_create_context_generic( ctx->elements = kcalloc(ctx->num_elements, sizeof(struct qeth_eddp_element), GFP_ATOMIC); if (ctx->elements == NULL) { - QETH_DBF_TEXT(trace, 2, "ceddpcn4"); + QETH_DBF_TEXT(TRACE, 2, "ceddpcn4"); qeth_eddp_free_context(ctx); return NULL; } @@ -573,7 +573,7 @@ static struct qeth_eddp_context *qeth_eddp_create_context_tcp( { struct qeth_eddp_context *ctx = NULL; - QETH_DBF_TEXT(trace, 5, "creddpct"); + QETH_DBF_TEXT(TRACE, 5, "creddpct"); if (skb->protocol == htons(ETH_P_IP)) ctx = qeth_eddp_create_context_generic(card, skb, (sizeof(struct qeth_hdr) + @@ -584,14 +584,14 @@ static struct qeth_eddp_context *qeth_eddp_create_context_tcp( sizeof(struct qeth_hdr) + sizeof(struct ipv6hdr) + tcp_hdrlen(skb)); else - QETH_DBF_TEXT(trace, 2, "cetcpinv"); + QETH_DBF_TEXT(TRACE, 2, "cetcpinv"); if (ctx == NULL) { - QETH_DBF_TEXT(trace, 2, "creddpnl"); + QETH_DBF_TEXT(TRACE, 2, "creddpnl"); return NULL; } if 
(qeth_eddp_fill_context_tcp(ctx, skb, qhdr)) { - QETH_DBF_TEXT(trace, 2, "ceddptfe"); + QETH_DBF_TEXT(TRACE, 2, "ceddptfe"); qeth_eddp_free_context(ctx); return NULL; } @@ -603,12 +603,12 @@ struct qeth_eddp_context *qeth_eddp_create_context(struct qeth_card *card, struct sk_buff *skb, struct qeth_hdr *qhdr, unsigned char sk_protocol) { - QETH_DBF_TEXT(trace, 5, "creddpc"); + QETH_DBF_TEXT(TRACE, 5, "creddpc"); switch (sk_protocol) { case IPPROTO_TCP: return qeth_eddp_create_context_tcp(card, skb, qhdr); default: - QETH_DBF_TEXT(trace, 2, "eddpinvp"); + QETH_DBF_TEXT(TRACE, 2, "eddpinvp"); } return NULL; } @@ -622,7 +622,7 @@ void qeth_tso_fill_header(struct qeth_card *card, struct qeth_hdr *qhdr, struct iphdr *iph = ip_hdr(skb); struct ipv6hdr *ip6h = ipv6_hdr(skb); - QETH_DBF_TEXT(trace, 5, "tsofhdr"); + QETH_DBF_TEXT(TRACE, 5, "tsofhdr"); /*fix header to TSO values ...*/ hdr->hdr.hdr.l3.id = QETH_HEADER_TYPE_TSO; diff --git a/drivers/s390/net/qeth_l2_main.c b/drivers/s390/net/qeth_l2_main.c index 0263d9406fcf..3921d1631a78 100644 --- a/drivers/s390/net/qeth_l2_main.c +++ b/drivers/s390/net/qeth_l2_main.c @@ -22,16 +22,7 @@ #include "qeth_core.h" #include "qeth_core_offl.h" -#define QETH_DBF_TEXT_(name, level, text...) \ - do { \ - if (qeth_dbf_passes(qeth_dbf_##name, level)) { \ - char *dbf_txt_buf = get_cpu_var(qeth_l2_dbf_txt_buf); \ - sprintf(dbf_txt_buf, text); \ - debug_text_event(qeth_dbf_##name, level, dbf_txt_buf); \ - put_cpu_var(qeth_l2_dbf_txt_buf); \ - } \ - } while (0) - +#define QETH_DBF_TXT_BUF qeth_l2_dbf_txt_buf static DEFINE_PER_CPU(char[256], qeth_l2_dbf_txt_buf); static int qeth_l2_set_offline(struct ccwgroup_device *); @@ -87,7 +78,7 @@ static int qeth_l2_do_ioctl(struct net_device *dev, struct ifreq *rq, int cmd) rc = -EOPNOTSUPP; } if (rc) - QETH_DBF_TEXT_(trace, 2, "ioce%d", rc); + QETH_DBF_TEXT_(TRACE, 2, "ioce%d", rc); return rc; } @@ -141,7 +132,7 @@ static int qeth_l2_send_setgroupmac_cb(struct qeth_card *card, struct qeth_ipa_cmd *cmd; __u8 *mac; - QETH_DBF_TEXT(trace, 2, "L2Sgmacb"); + QETH_DBF_TEXT(TRACE, 2, "L2Sgmacb"); cmd = (struct qeth_ipa_cmd *) data; mac = &cmd->data.setdelmac.mac[0]; /* MAC already registered, needed in couple/uncouple case */ @@ -162,7 +153,7 @@ static int qeth_l2_send_setgroupmac_cb(struct qeth_card *card, static int qeth_l2_send_setgroupmac(struct qeth_card *card, __u8 *mac) { - QETH_DBF_TEXT(trace, 2, "L2Sgmac"); + QETH_DBF_TEXT(TRACE, 2, "L2Sgmac"); return qeth_l2_send_setdelmac(card, mac, IPA_CMD_SETGMAC, qeth_l2_send_setgroupmac_cb); } @@ -174,7 +165,7 @@ static int qeth_l2_send_delgroupmac_cb(struct qeth_card *card, struct qeth_ipa_cmd *cmd; __u8 *mac; - QETH_DBF_TEXT(trace, 2, "L2Dgmacb"); + QETH_DBF_TEXT(TRACE, 2, "L2Dgmacb"); cmd = (struct qeth_ipa_cmd *) data; mac = &cmd->data.setdelmac.mac[0]; if (cmd->hdr.return_code) @@ -187,7 +178,7 @@ static int qeth_l2_send_delgroupmac_cb(struct qeth_card *card, static int qeth_l2_send_delgroupmac(struct qeth_card *card, __u8 *mac) { - QETH_DBF_TEXT(trace, 2, "L2Dgmac"); + QETH_DBF_TEXT(TRACE, 2, "L2Dgmac"); return qeth_l2_send_setdelmac(card, mac, IPA_CMD_DELGMAC, qeth_l2_send_delgroupmac_cb); } @@ -289,15 +280,15 @@ static int qeth_l2_send_setdelvlan_cb(struct qeth_card *card, { struct qeth_ipa_cmd *cmd; - QETH_DBF_TEXT(trace, 2, "L2sdvcb"); + QETH_DBF_TEXT(TRACE, 2, "L2sdvcb"); cmd = (struct qeth_ipa_cmd *) data; if (cmd->hdr.return_code) { PRINT_ERR("Error in processing VLAN %i on %s: 0x%x. 
" "Continuing\n", cmd->data.setdelvlan.vlan_id, QETH_CARD_IFNAME(card), cmd->hdr.return_code); - QETH_DBF_TEXT_(trace, 2, "L2VL%4x", cmd->hdr.command); - QETH_DBF_TEXT_(trace, 2, "L2%s", CARD_BUS_ID(card)); - QETH_DBF_TEXT_(trace, 2, "err%d", cmd->hdr.return_code); + QETH_DBF_TEXT_(TRACE, 2, "L2VL%4x", cmd->hdr.command); + QETH_DBF_TEXT_(TRACE, 2, "L2%s", CARD_BUS_ID(card)); + QETH_DBF_TEXT_(TRACE, 2, "err%d", cmd->hdr.return_code); } return 0; } @@ -308,7 +299,7 @@ static int qeth_l2_send_setdelvlan(struct qeth_card *card, __u16 i, struct qeth_ipa_cmd *cmd; struct qeth_cmd_buffer *iob; - QETH_DBF_TEXT_(trace, 4, "L2sdv%x", ipacmd); + QETH_DBF_TEXT_(TRACE, 4, "L2sdv%x", ipacmd); iob = qeth_get_ipacmd_buffer(card, ipacmd, QETH_PROT_IPV4); cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); cmd->data.setdelvlan.vlan_id = i; @@ -319,7 +310,7 @@ static int qeth_l2_send_setdelvlan(struct qeth_card *card, __u16 i, static void qeth_l2_process_vlans(struct qeth_card *card, int clear) { struct qeth_vlan_vid *id; - QETH_DBF_TEXT(trace, 3, "L2prcvln"); + QETH_DBF_TEXT(TRACE, 3, "L2prcvln"); spin_lock_bh(&card->vlanlock); list_for_each_entry(id, &card->vid_list, list) { if (clear) @@ -337,7 +328,7 @@ static void qeth_l2_vlan_rx_add_vid(struct net_device *dev, unsigned short vid) struct qeth_card *card = netdev_priv(dev); struct qeth_vlan_vid *id; - QETH_DBF_TEXT_(trace, 4, "aid:%d", vid); + QETH_DBF_TEXT_(TRACE, 4, "aid:%d", vid); id = kmalloc(sizeof(struct qeth_vlan_vid), GFP_ATOMIC); if (id) { id->vid = vid; @@ -355,7 +346,7 @@ static void qeth_l2_vlan_rx_kill_vid(struct net_device *dev, unsigned short vid) struct qeth_vlan_vid *id, *tmpid = NULL; struct qeth_card *card = netdev_priv(dev); - QETH_DBF_TEXT_(trace, 4, "kid:%d", vid); + QETH_DBF_TEXT_(TRACE, 4, "kid:%d", vid); spin_lock_bh(&card->vlanlock); list_for_each_entry(id, &card->vid_list, list) { if (id->vid == vid) { @@ -376,8 +367,8 @@ static int qeth_l2_stop_card(struct qeth_card *card, int recovery_mode) { int rc = 0; - QETH_DBF_TEXT(setup , 2, "stopcard"); - QETH_DBF_HEX(setup, 2, &card, sizeof(void *)); + QETH_DBF_TEXT(SETUP , 2, "stopcard"); + QETH_DBF_HEX(SETUP, 2, &card, sizeof(void *)); qeth_set_allowed_threads(card, 0, 1); if (qeth_wait_for_threads(card, ~QETH_RECOVER_THREAD)) @@ -396,7 +387,7 @@ static int qeth_l2_stop_card(struct qeth_card *card, int recovery_mode) if (!card->use_hard_stop) { __u8 *mac = &card->dev->dev_addr[0]; rc = qeth_l2_send_delmac(card, mac); - QETH_DBF_TEXT_(setup, 2, "Lerr%d", rc); + QETH_DBF_TEXT_(SETUP, 2, "Lerr%d", rc); } card->state = CARD_STATE_SOFTSETUP; } @@ -465,8 +456,8 @@ static void qeth_l2_process_inbound_buffer(struct qeth_card *card, break; default: dev_kfree_skb_any(skb); - QETH_DBF_TEXT(trace, 3, "inbunkno"); - QETH_DBF_HEX(control, 3, hdr, QETH_DBF_CONTROL_LEN); + QETH_DBF_TEXT(TRACE, 3, "inbunkno"); + QETH_DBF_HEX(CTRL, 3, hdr, QETH_DBF_CTRL_LEN); continue; } card->dev->last_rx = jiffies; @@ -484,7 +475,7 @@ static int qeth_l2_send_setdelmac(struct qeth_card *card, __u8 *mac, struct qeth_ipa_cmd *cmd; struct qeth_cmd_buffer *iob; - QETH_DBF_TEXT(trace, 2, "L2sdmac"); + QETH_DBF_TEXT(TRACE, 2, "L2sdmac"); iob = qeth_get_ipacmd_buffer(card, ipacmd, QETH_PROT_IPV4); cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); cmd->data.setdelmac.mac_length = OSA_ADDR_LEN; @@ -498,10 +489,10 @@ static int qeth_l2_send_setmac_cb(struct qeth_card *card, { struct qeth_ipa_cmd *cmd; - QETH_DBF_TEXT(trace, 2, "L2Smaccb"); + QETH_DBF_TEXT(TRACE, 2, "L2Smaccb"); cmd = (struct qeth_ipa_cmd 
*) data; if (cmd->hdr.return_code) { - QETH_DBF_TEXT_(trace, 2, "L2er%x", cmd->hdr.return_code); + QETH_DBF_TEXT_(TRACE, 2, "L2er%x", cmd->hdr.return_code); card->info.mac_bits &= ~QETH_LAYER2_MAC_REGISTERED; cmd->hdr.return_code = -EIO; } else { @@ -520,7 +511,7 @@ static int qeth_l2_send_setmac_cb(struct qeth_card *card, static int qeth_l2_send_setmac(struct qeth_card *card, __u8 *mac) { - QETH_DBF_TEXT(trace, 2, "L2Setmac"); + QETH_DBF_TEXT(TRACE, 2, "L2Setmac"); return qeth_l2_send_setdelmac(card, mac, IPA_CMD_SETVMAC, qeth_l2_send_setmac_cb); } @@ -531,10 +522,10 @@ static int qeth_l2_send_delmac_cb(struct qeth_card *card, { struct qeth_ipa_cmd *cmd; - QETH_DBF_TEXT(trace, 2, "L2Dmaccb"); + QETH_DBF_TEXT(TRACE, 2, "L2Dmaccb"); cmd = (struct qeth_ipa_cmd *) data; if (cmd->hdr.return_code) { - QETH_DBF_TEXT_(trace, 2, "err%d", cmd->hdr.return_code); + QETH_DBF_TEXT_(TRACE, 2, "err%d", cmd->hdr.return_code); cmd->hdr.return_code = -EIO; return 0; } @@ -545,7 +536,7 @@ static int qeth_l2_send_delmac_cb(struct qeth_card *card, static int qeth_l2_send_delmac(struct qeth_card *card, __u8 *mac) { - QETH_DBF_TEXT(trace, 2, "L2Delmac"); + QETH_DBF_TEXT(TRACE, 2, "L2Delmac"); if (!(card->info.mac_bits & QETH_LAYER2_MAC_REGISTERED)) return 0; return qeth_l2_send_setdelmac(card, mac, IPA_CMD_DELVMAC, @@ -557,8 +548,8 @@ static int qeth_l2_request_initial_mac(struct qeth_card *card) int rc = 0; char vendor_pre[] = {0x02, 0x00, 0x00}; - QETH_DBF_TEXT(setup, 2, "doL2init"); - QETH_DBF_TEXT_(setup, 2, "doL2%s", CARD_BUS_ID(card)); + QETH_DBF_TEXT(SETUP, 2, "doL2init"); + QETH_DBF_TEXT_(SETUP, 2, "doL2%s", CARD_BUS_ID(card)); rc = qeth_query_setadapterparms(card); if (rc) { @@ -572,10 +563,10 @@ static int qeth_l2_request_initial_mac(struct qeth_card *card) PRINT_WARN("couldn't get MAC address on " "device %s: x%x\n", CARD_BUS_ID(card), rc); - QETH_DBF_TEXT_(setup, 2, "1err%d", rc); + QETH_DBF_TEXT_(SETUP, 2, "1err%d", rc); return rc; } - QETH_DBF_HEX(setup, 2, card->dev->dev_addr, OSA_ADDR_LEN); + QETH_DBF_HEX(SETUP, 2, card->dev->dev_addr, OSA_ADDR_LEN); } else { random_ether_addr(card->dev->dev_addr); memcpy(card->dev->dev_addr, vendor_pre, 3); @@ -589,21 +580,21 @@ static int qeth_l2_set_mac_address(struct net_device *dev, void *p) struct qeth_card *card = netdev_priv(dev); int rc = 0; - QETH_DBF_TEXT(trace, 3, "setmac"); + QETH_DBF_TEXT(TRACE, 3, "setmac"); if (qeth_l2_verify_dev(dev) != QETH_REAL_CARD) { - QETH_DBF_TEXT(trace, 3, "setmcINV"); + QETH_DBF_TEXT(TRACE, 3, "setmcINV"); return -EOPNOTSUPP; } if (card->info.type == QETH_CARD_TYPE_OSN) { PRINT_WARN("Setting MAC address on %s is not supported.\n", dev->name); - QETH_DBF_TEXT(trace, 3, "setmcOSN"); + QETH_DBF_TEXT(TRACE, 3, "setmcOSN"); return -EOPNOTSUPP; } - QETH_DBF_TEXT_(trace, 3, "%s", CARD_BUS_ID(card)); - QETH_DBF_HEX(trace, 3, addr->sa_data, OSA_ADDR_LEN); + QETH_DBF_TEXT_(TRACE, 3, "%s", CARD_BUS_ID(card)); + QETH_DBF_HEX(TRACE, 3, addr->sa_data, OSA_ADDR_LEN); rc = qeth_l2_send_delmac(card, &card->dev->dev_addr[0]); if (!rc) rc = qeth_l2_send_setmac(card, addr->sa_data); @@ -618,7 +609,7 @@ static void qeth_l2_set_multicast_list(struct net_device *dev) if (card->info.type == QETH_CARD_TYPE_OSN) return ; - QETH_DBF_TEXT(trace, 3, "setmulti"); + QETH_DBF_TEXT(TRACE, 3, "setmulti"); qeth_l2_del_all_mc(card); spin_lock_bh(&card->mclock); for (dm = dev->mc_list; dm; dm = dm->next) @@ -644,7 +635,7 @@ static int qeth_l2_hard_start_xmit(struct sk_buff *skb, struct net_device *dev) enum qeth_large_send_types large_send = 
QETH_LARGE_SEND_NO; struct qeth_eddp_context *ctx = NULL; - QETH_DBF_TEXT(trace, 6, "l2xmit"); + QETH_DBF_TEXT(TRACE, 6, "l2xmit"); if ((card->state != CARD_STATE_UP) || !card->lan_online) { card->stats.tx_carrier_errors++; @@ -756,7 +747,7 @@ static void qeth_l2_qdio_input_handler(struct ccw_device *ccwdev, int index; int i; - QETH_DBF_TEXT(trace, 6, "qdinput"); + QETH_DBF_TEXT(TRACE, 6, "qdinput"); card = (struct qeth_card *) card_ptr; net_dev = card->dev; if (card->options.performance_stats) { @@ -765,11 +756,11 @@ static void qeth_l2_qdio_input_handler(struct ccw_device *ccwdev, } if (status & QDIO_STATUS_LOOK_FOR_ERROR) { if (status & QDIO_STATUS_ACTIVATE_CHECK_CONDITION) { - QETH_DBF_TEXT(trace, 1, "qdinchk"); - QETH_DBF_TEXT_(trace, 1, "%s", CARD_BUS_ID(card)); - QETH_DBF_TEXT_(trace, 1, "%04X%04X", first_element, + QETH_DBF_TEXT(TRACE, 1, "qdinchk"); + QETH_DBF_TEXT_(TRACE, 1, "%s", CARD_BUS_ID(card)); + QETH_DBF_TEXT_(TRACE, 1, "%04X%04X", first_element, count); - QETH_DBF_TEXT_(trace, 1, "%04X%04X", queue, status); + QETH_DBF_TEXT_(TRACE, 1, "%04X%04X", queue, status); qeth_schedule_recovery(card); return; } @@ -794,13 +785,13 @@ static int qeth_l2_open(struct net_device *dev) { struct qeth_card *card = netdev_priv(dev); - QETH_DBF_TEXT(trace, 4, "qethopen"); + QETH_DBF_TEXT(TRACE, 4, "qethopen"); if (card->state != CARD_STATE_SOFTSETUP) return -ENODEV; if ((card->info.type != QETH_CARD_TYPE_OSN) && (!(card->info.mac_bits & QETH_LAYER2_MAC_REGISTERED))) { - QETH_DBF_TEXT(trace, 4, "nomacadr"); + QETH_DBF_TEXT(TRACE, 4, "nomacadr"); return -EPERM; } card->data.state = CH_STATE_UP; @@ -818,7 +809,7 @@ static int qeth_l2_stop(struct net_device *dev) { struct qeth_card *card = netdev_priv(dev); - QETH_DBF_TEXT(trace, 4, "qethstop"); + QETH_DBF_TEXT(TRACE, 4, "qethstop"); netif_tx_disable(dev); card->dev->flags &= ~IFF_UP; if (card->state == CARD_STATE_UP) @@ -934,8 +925,8 @@ static int __qeth_l2_set_online(struct ccwgroup_device *gdev, int recovery_mode) enum qeth_card_states recover_flag; BUG_ON(!card); - QETH_DBF_TEXT(setup, 2, "setonlin"); - QETH_DBF_HEX(setup, 2, &card, sizeof(void *)); + QETH_DBF_TEXT(SETUP, 2, "setonlin"); + QETH_DBF_HEX(SETUP, 2, &card, sizeof(void *)); qeth_set_allowed_threads(card, QETH_RECOVER_THREAD, 1); if (qeth_wait_for_threads(card, ~QETH_RECOVER_THREAD)) { @@ -947,23 +938,23 @@ static int __qeth_l2_set_online(struct ccwgroup_device *gdev, int recovery_mode) recover_flag = card->state; rc = ccw_device_set_online(CARD_RDEV(card)); if (rc) { - QETH_DBF_TEXT_(setup, 2, "1err%d", rc); + QETH_DBF_TEXT_(SETUP, 2, "1err%d", rc); return -EIO; } rc = ccw_device_set_online(CARD_WDEV(card)); if (rc) { - QETH_DBF_TEXT_(setup, 2, "1err%d", rc); + QETH_DBF_TEXT_(SETUP, 2, "1err%d", rc); return -EIO; } rc = ccw_device_set_online(CARD_DDEV(card)); if (rc) { - QETH_DBF_TEXT_(setup, 2, "1err%d", rc); + QETH_DBF_TEXT_(SETUP, 2, "1err%d", rc); return -EIO; } rc = qeth_core_hardsetup_card(card); if (rc) { - QETH_DBF_TEXT_(setup, 2, "2err%d", rc); + QETH_DBF_TEXT_(SETUP, 2, "2err%d", rc); goto out_remove; } @@ -977,11 +968,11 @@ static int __qeth_l2_set_online(struct ccwgroup_device *gdev, int recovery_mode) qeth_print_status_message(card); /* softsetup */ - QETH_DBF_TEXT(setup, 2, "softsetp"); + QETH_DBF_TEXT(SETUP, 2, "softsetp"); rc = qeth_send_startlan(card); if (rc) { - QETH_DBF_TEXT_(setup, 2, "1err%d", rc); + QETH_DBF_TEXT_(SETUP, 2, "1err%d", rc); if (rc == 0xe080) { PRINT_WARN("LAN on card %s if offline! 
" "Waiting for STARTLAN from card.\n", @@ -1001,7 +992,7 @@ static int __qeth_l2_set_online(struct ccwgroup_device *gdev, int recovery_mode) rc = qeth_init_qdio_queues(card); if (rc) { - QETH_DBF_TEXT_(setup, 2, "6err%d", rc); + QETH_DBF_TEXT_(SETUP, 2, "6err%d", rc); goto out_remove; } card->state = CARD_STATE_SOFTSETUP; @@ -1048,8 +1039,8 @@ static int __qeth_l2_set_offline(struct ccwgroup_device *cgdev, int rc = 0, rc2 = 0, rc3 = 0; enum qeth_card_states recover_flag; - QETH_DBF_TEXT(setup, 3, "setoffl"); - QETH_DBF_HEX(setup, 3, &card, sizeof(void *)); + QETH_DBF_TEXT(SETUP, 3, "setoffl"); + QETH_DBF_HEX(SETUP, 3, &card, sizeof(void *)); if (card->dev && netif_carrier_ok(card->dev)) netif_carrier_off(card->dev); @@ -1065,7 +1056,7 @@ static int __qeth_l2_set_offline(struct ccwgroup_device *cgdev, if (!rc) rc = (rc2) ? rc2 : rc3; if (rc) - QETH_DBF_TEXT_(setup, 2, "1err%d", rc); + QETH_DBF_TEXT_(SETUP, 2, "1err%d", rc); if (recover_flag == CARD_STATE_UP) card->state = CARD_STATE_RECOVER; /* let user_space know that device is offline */ @@ -1084,11 +1075,11 @@ static int qeth_l2_recover(void *ptr) int rc = 0; card = (struct qeth_card *) ptr; - QETH_DBF_TEXT(trace, 2, "recover1"); - QETH_DBF_HEX(trace, 2, &card, sizeof(void *)); + QETH_DBF_TEXT(TRACE, 2, "recover1"); + QETH_DBF_HEX(TRACE, 2, &card, sizeof(void *)); if (!qeth_do_run_thread(card, QETH_RECOVER_THREAD)) return 0; - QETH_DBF_TEXT(trace, 2, "recover2"); + QETH_DBF_TEXT(TRACE, 2, "recover2"); PRINT_WARN("Recovery of device %s started ...\n", CARD_BUS_ID(card)); card->use_hard_stop = 1; @@ -1139,12 +1130,12 @@ static int qeth_osn_send_control_data(struct qeth_card *card, int len, unsigned long flags; int rc = 0; - QETH_DBF_TEXT(trace, 5, "osndctrd"); + QETH_DBF_TEXT(TRACE, 5, "osndctrd"); wait_event(card->wait_q, atomic_cmpxchg(&card->write.irq_pending, 0, 1) == 0); qeth_prepare_control_data(card, len, iob); - QETH_DBF_TEXT(trace, 6, "osnoirqp"); + QETH_DBF_TEXT(TRACE, 6, "osnoirqp"); spin_lock_irqsave(get_ccwdev_lock(card->write.ccwdev), flags); rc = ccw_device_start(card->write.ccwdev, &card->write.ccw, (addr_t) iob, 0, 0); @@ -1152,7 +1143,7 @@ static int qeth_osn_send_control_data(struct qeth_card *card, int len, if (rc) { PRINT_WARN("qeth_osn_send_control_data: " "ccw_device_start rc = %i\n", rc); - QETH_DBF_TEXT_(trace, 2, " err%d", rc); + QETH_DBF_TEXT_(TRACE, 2, " err%d", rc); qeth_release_buffer(iob->channel, iob); atomic_set(&card->write.irq_pending, 0); wake_up(&card->wait_q); @@ -1165,7 +1156,7 @@ static int qeth_osn_send_ipa_cmd(struct qeth_card *card, { u16 s1, s2; - QETH_DBF_TEXT(trace, 4, "osndipa"); + QETH_DBF_TEXT(TRACE, 4, "osndipa"); qeth_prepare_ipa_cmd(card, iob, QETH_PROT_OSN2); s1 = (u16)(IPA_PDU_HEADER_SIZE + data_len); @@ -1183,7 +1174,7 @@ int qeth_osn_assist(struct net_device *dev, void *data, int data_len) struct qeth_card *card; int rc; - QETH_DBF_TEXT(trace, 2, "osnsdmc"); + QETH_DBF_TEXT(TRACE, 2, "osnsdmc"); if (!dev) return -ENODEV; card = netdev_priv(dev); @@ -1205,7 +1196,7 @@ int qeth_osn_register(unsigned char *read_dev_no, struct net_device **dev, { struct qeth_card *card; - QETH_DBF_TEXT(trace, 2, "osnreg"); + QETH_DBF_TEXT(TRACE, 2, "osnreg"); *dev = qeth_l2_netdev_by_devno(read_dev_no); if (*dev == NULL) return -ENODEV; @@ -1224,7 +1215,7 @@ void qeth_osn_deregister(struct net_device *dev) { struct qeth_card *card; - QETH_DBF_TEXT(trace, 2, "osndereg"); + QETH_DBF_TEXT(TRACE, 2, "osndereg"); if (!dev) return; card = netdev_priv(dev); diff --git a/drivers/s390/net/qeth_l3.h 
b/drivers/s390/net/qeth_l3.h index f639cc3af22b..1be353593a59 100644 --- a/drivers/s390/net/qeth_l3.h +++ b/drivers/s390/net/qeth_l3.h @@ -13,16 +13,7 @@ #include "qeth_core.h" -#define QETH_DBF_TEXT_(name, level, text...) \ - do { \ - if (qeth_dbf_passes(qeth_dbf_##name, level)) { \ - char *dbf_txt_buf = get_cpu_var(qeth_l3_dbf_txt_buf); \ - sprintf(dbf_txt_buf, text); \ - debug_text_event(qeth_dbf_##name, level, dbf_txt_buf); \ - put_cpu_var(qeth_l3_dbf_txt_buf); \ - } \ - } while (0) - +#define QETH_DBF_TXT_BUF qeth_l3_dbf_txt_buf DECLARE_PER_CPU(char[256], qeth_l3_dbf_txt_buf); struct qeth_ipaddr { diff --git a/drivers/s390/net/qeth_l3_main.c b/drivers/s390/net/qeth_l3_main.c index e95220a2638d..c5e90eecae45 100644 --- a/drivers/s390/net/qeth_l3_main.c +++ b/drivers/s390/net/qeth_l3_main.c @@ -259,7 +259,7 @@ static int __qeth_l3_insert_ip_todo(struct qeth_card *card, addr->users += add ? 1 : -1; if (add && (addr->type == QETH_IP_TYPE_NORMAL) && qeth_l3_is_addr_covered_by_ipato(card, addr)) { - QETH_DBF_TEXT(trace, 2, "tkovaddr"); + QETH_DBF_TEXT(TRACE, 2, "tkovaddr"); addr->set_flags |= QETH_IPA_SETIP_TAKEOVER_FLAG; } list_add_tail(&addr->entry, card->ip_tbd_list); @@ -273,13 +273,13 @@ static int qeth_l3_delete_ip(struct qeth_card *card, struct qeth_ipaddr *addr) unsigned long flags; int rc = 0; - QETH_DBF_TEXT(trace, 4, "delip"); + QETH_DBF_TEXT(TRACE, 4, "delip"); if (addr->proto == QETH_PROT_IPV4) - QETH_DBF_HEX(trace, 4, &addr->u.a4.addr, 4); + QETH_DBF_HEX(TRACE, 4, &addr->u.a4.addr, 4); else { - QETH_DBF_HEX(trace, 4, &addr->u.a6.addr, 8); - QETH_DBF_HEX(trace, 4, ((char *)&addr->u.a6.addr) + 8, 8); + QETH_DBF_HEX(TRACE, 4, &addr->u.a6.addr, 8); + QETH_DBF_HEX(TRACE, 4, ((char *)&addr->u.a6.addr) + 8, 8); } spin_lock_irqsave(&card->ip_lock, flags); rc = __qeth_l3_insert_ip_todo(card, addr, 0); @@ -292,12 +292,12 @@ static int qeth_l3_add_ip(struct qeth_card *card, struct qeth_ipaddr *addr) unsigned long flags; int rc = 0; - QETH_DBF_TEXT(trace, 4, "addip"); + QETH_DBF_TEXT(TRACE, 4, "addip"); if (addr->proto == QETH_PROT_IPV4) - QETH_DBF_HEX(trace, 4, &addr->u.a4.addr, 4); + QETH_DBF_HEX(TRACE, 4, &addr->u.a4.addr, 4); else { - QETH_DBF_HEX(trace, 4, &addr->u.a6.addr, 8); - QETH_DBF_HEX(trace, 4, ((char *)&addr->u.a6.addr) + 8, 8); + QETH_DBF_HEX(TRACE, 4, &addr->u.a6.addr, 8); + QETH_DBF_HEX(TRACE, 4, ((char *)&addr->u.a6.addr) + 8, 8); } spin_lock_irqsave(&card->ip_lock, flags); rc = __qeth_l3_insert_ip_todo(card, addr, 1); @@ -326,10 +326,10 @@ static void qeth_l3_delete_mc_addresses(struct qeth_card *card) struct qeth_ipaddr *iptodo; unsigned long flags; - QETH_DBF_TEXT(trace, 4, "delmc"); + QETH_DBF_TEXT(TRACE, 4, "delmc"); iptodo = qeth_l3_get_addr_buffer(QETH_PROT_IPV4); if (!iptodo) { - QETH_DBF_TEXT(trace, 2, "dmcnomem"); + QETH_DBF_TEXT(TRACE, 2, "dmcnomem"); return; } iptodo->type = QETH_IP_TYPE_DEL_ALL_MC; @@ -430,14 +430,14 @@ static void qeth_l3_set_ip_addr_list(struct qeth_card *card) unsigned long flags; int rc; - QETH_DBF_TEXT(trace, 2, "sdiplist"); - QETH_DBF_HEX(trace, 2, &card, sizeof(void *)); + QETH_DBF_TEXT(TRACE, 2, "sdiplist"); + QETH_DBF_HEX(TRACE, 2, &card, sizeof(void *)); spin_lock_irqsave(&card->ip_lock, flags); tbd_list = card->ip_tbd_list; card->ip_tbd_list = kmalloc(sizeof(struct list_head), GFP_ATOMIC); if (!card->ip_tbd_list) { - QETH_DBF_TEXT(trace, 0, "silnomem"); + QETH_DBF_TEXT(TRACE, 0, "silnomem"); card->ip_tbd_list = tbd_list; spin_unlock_irqrestore(&card->ip_lock, flags); return; @@ -488,7 +488,7 @@ static void 
qeth_l3_clear_ip_list(struct qeth_card *card, int clean, struct qeth_ipaddr *addr, *tmp; unsigned long flags; - QETH_DBF_TEXT(trace, 4, "clearip"); + QETH_DBF_TEXT(TRACE, 4, "clearip"); spin_lock_irqsave(&card->ip_lock, flags); /* clear todo list */ list_for_each_entry_safe(addr, tmp, card->ip_tbd_list, entry) { @@ -546,7 +546,7 @@ static int qeth_l3_send_setdelmc(struct qeth_card *card, struct qeth_cmd_buffer *iob; struct qeth_ipa_cmd *cmd; - QETH_DBF_TEXT(trace, 4, "setdelmc"); + QETH_DBF_TEXT(TRACE, 4, "setdelmc"); iob = qeth_get_ipacmd_buffer(card, ipacmd, addr->proto); cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); @@ -584,8 +584,8 @@ static int qeth_l3_send_setdelip(struct qeth_card *card, struct qeth_ipa_cmd *cmd; __u8 netmask[16]; - QETH_DBF_TEXT(trace, 4, "setdelip"); - QETH_DBF_TEXT_(trace, 4, "flags%02X", flags); + QETH_DBF_TEXT(TRACE, 4, "setdelip"); + QETH_DBF_TEXT_(TRACE, 4, "flags%02X", flags); iob = qeth_get_ipacmd_buffer(card, ipacmd, addr->proto); cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); @@ -614,7 +614,7 @@ static int qeth_l3_send_setrouting(struct qeth_card *card, struct qeth_ipa_cmd *cmd; struct qeth_cmd_buffer *iob; - QETH_DBF_TEXT(trace, 4, "setroutg"); + QETH_DBF_TEXT(TRACE, 4, "setroutg"); iob = qeth_get_ipacmd_buffer(card, IPA_CMD_SETRTG, prot); cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); cmd->data.setrtg.type = (type); @@ -667,7 +667,7 @@ int qeth_l3_setrouting_v4(struct qeth_card *card) { int rc; - QETH_DBF_TEXT(trace, 3, "setrtg4"); + QETH_DBF_TEXT(TRACE, 3, "setrtg4"); qeth_l3_correct_routing_type(card, &card->options.route4.type, QETH_PROT_IPV4); @@ -687,7 +687,7 @@ int qeth_l3_setrouting_v6(struct qeth_card *card) { int rc = 0; - QETH_DBF_TEXT(trace, 3, "setrtg6"); + QETH_DBF_TEXT(TRACE, 3, "setrtg6"); #ifdef CONFIG_QETH_IPV6 if (!qeth_is_supported(card, IPA_IPV6)) @@ -731,7 +731,7 @@ int qeth_l3_add_ipato_entry(struct qeth_card *card, unsigned long flags; int rc = 0; - QETH_DBF_TEXT(trace, 2, "addipato"); + QETH_DBF_TEXT(TRACE, 2, "addipato"); spin_lock_irqsave(&card->ip_lock, flags); list_for_each_entry(ipatoe, &card->ipato.entries, entry) { if (ipatoe->proto != new->proto) @@ -757,7 +757,7 @@ void qeth_l3_del_ipato_entry(struct qeth_card *card, struct qeth_ipato_entry *ipatoe, *tmp; unsigned long flags; - QETH_DBF_TEXT(trace, 2, "delipato"); + QETH_DBF_TEXT(TRACE, 2, "delipato"); spin_lock_irqsave(&card->ip_lock, flags); list_for_each_entry_safe(ipatoe, tmp, &card->ipato.entries, entry) { if (ipatoe->proto != proto) @@ -785,11 +785,11 @@ int qeth_l3_add_vipa(struct qeth_card *card, enum qeth_prot_versions proto, ipaddr = qeth_l3_get_addr_buffer(proto); if (ipaddr) { if (proto == QETH_PROT_IPV4) { - QETH_DBF_TEXT(trace, 2, "addvipa4"); + QETH_DBF_TEXT(TRACE, 2, "addvipa4"); memcpy(&ipaddr->u.a4.addr, addr, 4); ipaddr->u.a4.mask = 0; } else if (proto == QETH_PROT_IPV6) { - QETH_DBF_TEXT(trace, 2, "addvipa6"); + QETH_DBF_TEXT(TRACE, 2, "addvipa6"); memcpy(&ipaddr->u.a6.addr, addr, 16); ipaddr->u.a6.pfxlen = 0; } @@ -821,11 +821,11 @@ void qeth_l3_del_vipa(struct qeth_card *card, enum qeth_prot_versions proto, ipaddr = qeth_l3_get_addr_buffer(proto); if (ipaddr) { if (proto == QETH_PROT_IPV4) { - QETH_DBF_TEXT(trace, 2, "delvipa4"); + QETH_DBF_TEXT(TRACE, 2, "delvipa4"); memcpy(&ipaddr->u.a4.addr, addr, 4); ipaddr->u.a4.mask = 0; } else if (proto == QETH_PROT_IPV6) { - QETH_DBF_TEXT(trace, 2, "delvipa6"); + QETH_DBF_TEXT(TRACE, 2, "delvipa6"); memcpy(&ipaddr->u.a6.addr, addr, 16); ipaddr->u.a6.pfxlen = 0; } 
@@ -850,11 +850,11 @@ int qeth_l3_add_rxip(struct qeth_card *card, enum qeth_prot_versions proto, ipaddr = qeth_l3_get_addr_buffer(proto); if (ipaddr) { if (proto == QETH_PROT_IPV4) { - QETH_DBF_TEXT(trace, 2, "addrxip4"); + QETH_DBF_TEXT(TRACE, 2, "addrxip4"); memcpy(&ipaddr->u.a4.addr, addr, 4); ipaddr->u.a4.mask = 0; } else if (proto == QETH_PROT_IPV6) { - QETH_DBF_TEXT(trace, 2, "addrxip6"); + QETH_DBF_TEXT(TRACE, 2, "addrxip6"); memcpy(&ipaddr->u.a6.addr, addr, 16); ipaddr->u.a6.pfxlen = 0; } @@ -886,11 +886,11 @@ void qeth_l3_del_rxip(struct qeth_card *card, enum qeth_prot_versions proto, ipaddr = qeth_l3_get_addr_buffer(proto); if (ipaddr) { if (proto == QETH_PROT_IPV4) { - QETH_DBF_TEXT(trace, 2, "addrxip4"); + QETH_DBF_TEXT(TRACE, 2, "addrxip4"); memcpy(&ipaddr->u.a4.addr, addr, 4); ipaddr->u.a4.mask = 0; } else if (proto == QETH_PROT_IPV6) { - QETH_DBF_TEXT(trace, 2, "addrxip6"); + QETH_DBF_TEXT(TRACE, 2, "addrxip6"); memcpy(&ipaddr->u.a6.addr, addr, 16); ipaddr->u.a6.pfxlen = 0; } @@ -910,15 +910,15 @@ static int qeth_l3_register_addr_entry(struct qeth_card *card, int cnt = 3; if (addr->proto == QETH_PROT_IPV4) { - QETH_DBF_TEXT(trace, 2, "setaddr4"); - QETH_DBF_HEX(trace, 3, &addr->u.a4.addr, sizeof(int)); + QETH_DBF_TEXT(TRACE, 2, "setaddr4"); + QETH_DBF_HEX(TRACE, 3, &addr->u.a4.addr, sizeof(int)); } else if (addr->proto == QETH_PROT_IPV6) { - QETH_DBF_TEXT(trace, 2, "setaddr6"); - QETH_DBF_HEX(trace, 3, &addr->u.a6.addr, 8); - QETH_DBF_HEX(trace, 3, ((char *)&addr->u.a6.addr) + 8, 8); + QETH_DBF_TEXT(TRACE, 2, "setaddr6"); + QETH_DBF_HEX(TRACE, 3, &addr->u.a6.addr, 8); + QETH_DBF_HEX(TRACE, 3, ((char *)&addr->u.a6.addr) + 8, 8); } else { - QETH_DBF_TEXT(trace, 2, "setaddr?"); - QETH_DBF_HEX(trace, 3, addr, sizeof(struct qeth_ipaddr)); + QETH_DBF_TEXT(TRACE, 2, "setaddr?"); + QETH_DBF_HEX(TRACE, 3, addr, sizeof(struct qeth_ipaddr)); } do { if (addr->is_multicast) @@ -927,10 +927,10 @@ static int qeth_l3_register_addr_entry(struct qeth_card *card, rc = qeth_l3_send_setdelip(card, addr, IPA_CMD_SETIP, addr->set_flags); if (rc) - QETH_DBF_TEXT(trace, 2, "failed"); + QETH_DBF_TEXT(TRACE, 2, "failed"); } while ((--cnt > 0) && rc); if (rc) { - QETH_DBF_TEXT(trace, 2, "FAILED"); + QETH_DBF_TEXT(TRACE, 2, "FAILED"); qeth_l3_ipaddr_to_string(addr->proto, (u8 *)&addr->u, buf); PRINT_WARN("Could not register IP address %s (rc=0x%x/%d)\n", buf, rc, rc); @@ -944,15 +944,15 @@ static int qeth_l3_deregister_addr_entry(struct qeth_card *card, int rc = 0; if (addr->proto == QETH_PROT_IPV4) { - QETH_DBF_TEXT(trace, 2, "deladdr4"); - QETH_DBF_HEX(trace, 3, &addr->u.a4.addr, sizeof(int)); + QETH_DBF_TEXT(TRACE, 2, "deladdr4"); + QETH_DBF_HEX(TRACE, 3, &addr->u.a4.addr, sizeof(int)); } else if (addr->proto == QETH_PROT_IPV6) { - QETH_DBF_TEXT(trace, 2, "deladdr6"); - QETH_DBF_HEX(trace, 3, &addr->u.a6.addr, 8); - QETH_DBF_HEX(trace, 3, ((char *)&addr->u.a6.addr) + 8, 8); + QETH_DBF_TEXT(TRACE, 2, "deladdr6"); + QETH_DBF_HEX(TRACE, 3, &addr->u.a6.addr, 8); + QETH_DBF_HEX(TRACE, 3, ((char *)&addr->u.a6.addr) + 8, 8); } else { - QETH_DBF_TEXT(trace, 2, "deladdr?"); - QETH_DBF_HEX(trace, 3, addr, sizeof(struct qeth_ipaddr)); + QETH_DBF_TEXT(TRACE, 2, "deladdr?"); + QETH_DBF_HEX(TRACE, 3, addr, sizeof(struct qeth_ipaddr)); } if (addr->is_multicast) rc = qeth_l3_send_setdelmc(card, addr, IPA_CMD_DELIPM); @@ -960,7 +960,7 @@ static int qeth_l3_deregister_addr_entry(struct qeth_card *card, rc = qeth_l3_send_setdelip(card, addr, IPA_CMD_DELIP, addr->del_flags); if (rc) { - QETH_DBF_TEXT(trace, 2, 
"failed"); + QETH_DBF_TEXT(TRACE, 2, "failed"); /* TODO: re-activate this warning as soon as we have a * clean mirco code qeth_ipaddr_to_string(addr->proto, (u8 *)&addr->u, buf); @@ -1000,7 +1000,7 @@ static int qeth_l3_send_setadp_mode(struct qeth_card *card, __u32 command, struct qeth_cmd_buffer *iob; struct qeth_ipa_cmd *cmd; - QETH_DBF_TEXT(trace, 4, "adpmode"); + QETH_DBF_TEXT(TRACE, 4, "adpmode"); iob = qeth_get_adapter_cmd(card, command, sizeof(struct qeth_ipacmd_setadpparms)); @@ -1015,7 +1015,7 @@ static int qeth_l3_setadapter_hstr(struct qeth_card *card) { int rc; - QETH_DBF_TEXT(trace, 4, "adphstr"); + QETH_DBF_TEXT(TRACE, 4, "adphstr"); if (qeth_adp_supported(card, IPA_SETADP_SET_BROADCAST_MODE)) { rc = qeth_l3_send_setadp_mode(card, @@ -1048,13 +1048,13 @@ static int qeth_l3_setadapter_parms(struct qeth_card *card) { int rc; - QETH_DBF_TEXT(setup, 2, "setadprm"); + QETH_DBF_TEXT(SETUP, 2, "setadprm"); if (!qeth_is_supported(card, IPA_SETADAPTERPARMS)) { PRINT_WARN("set adapter parameters not supported " "on device %s.\n", CARD_BUS_ID(card)); - QETH_DBF_TEXT(setup, 2, " notsupp"); + QETH_DBF_TEXT(SETUP, 2, " notsupp"); return 0; } rc = qeth_query_setadapterparms(card); @@ -1083,7 +1083,7 @@ static int qeth_l3_default_setassparms_cb(struct qeth_card *card, { struct qeth_ipa_cmd *cmd; - QETH_DBF_TEXT(trace, 4, "defadpcb"); + QETH_DBF_TEXT(TRACE, 4, "defadpcb"); cmd = (struct qeth_ipa_cmd *) data; if (cmd->hdr.return_code == 0) { @@ -1096,7 +1096,7 @@ static int qeth_l3_default_setassparms_cb(struct qeth_card *card, if (cmd->data.setassparms.hdr.assist_no == IPA_INBOUND_CHECKSUM && cmd->data.setassparms.hdr.command_code == IPA_CMD_ASS_START) { card->info.csum_mask = cmd->data.setassparms.data.flags_32bit; - QETH_DBF_TEXT_(trace, 3, "csum:%d", card->info.csum_mask); + QETH_DBF_TEXT_(TRACE, 3, "csum:%d", card->info.csum_mask); } return 0; } @@ -1108,7 +1108,7 @@ static struct qeth_cmd_buffer *qeth_l3_get_setassparms_cmd( struct qeth_cmd_buffer *iob; struct qeth_ipa_cmd *cmd; - QETH_DBF_TEXT(trace, 4, "getasscm"); + QETH_DBF_TEXT(TRACE, 4, "getasscm"); iob = qeth_get_ipacmd_buffer(card, IPA_CMD_SETASSPARMS, prot); cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); @@ -1130,7 +1130,7 @@ static int qeth_l3_send_setassparms(struct qeth_card *card, int rc; struct qeth_ipa_cmd *cmd; - QETH_DBF_TEXT(trace, 4, "sendassp"); + QETH_DBF_TEXT(TRACE, 4, "sendassp"); cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); if (len <= sizeof(__u32)) @@ -1149,7 +1149,7 @@ static int qeth_l3_send_simple_setassparms_ipv6(struct qeth_card *card, int rc; struct qeth_cmd_buffer *iob; - QETH_DBF_TEXT(trace, 4, "simassp6"); + QETH_DBF_TEXT(TRACE, 4, "simassp6"); iob = qeth_l3_get_setassparms_cmd(card, ipa_func, cmd_code, 0, QETH_PROT_IPV6); rc = qeth_l3_send_setassparms(card, iob, 0, 0, @@ -1165,7 +1165,7 @@ static int qeth_l3_send_simple_setassparms(struct qeth_card *card, int length = 0; struct qeth_cmd_buffer *iob; - QETH_DBF_TEXT(trace, 4, "simassp4"); + QETH_DBF_TEXT(TRACE, 4, "simassp4"); if (data) length = sizeof(__u32); iob = qeth_l3_get_setassparms_cmd(card, ipa_func, cmd_code, @@ -1179,7 +1179,7 @@ static int qeth_l3_start_ipa_arp_processing(struct qeth_card *card) { int rc; - QETH_DBF_TEXT(trace, 3, "ipaarp"); + QETH_DBF_TEXT(TRACE, 3, "ipaarp"); if (!qeth_is_supported(card, IPA_ARP_PROCESSING)) { PRINT_WARN("ARP processing not supported " @@ -1200,7 +1200,7 @@ static int qeth_l3_start_ipa_ip_fragmentation(struct qeth_card *card) { int rc; - QETH_DBF_TEXT(trace, 3, "ipaipfrg"); 
+ QETH_DBF_TEXT(TRACE, 3, "ipaipfrg"); if (!qeth_is_supported(card, IPA_IP_FRAGMENTATION)) { PRINT_INFO("Hardware IP fragmentation not supported on %s\n", @@ -1223,7 +1223,7 @@ static int qeth_l3_start_ipa_source_mac(struct qeth_card *card) { int rc; - QETH_DBF_TEXT(trace, 3, "stsrcmac"); + QETH_DBF_TEXT(TRACE, 3, "stsrcmac"); if (!card->options.fake_ll) return -EOPNOTSUPP; @@ -1247,7 +1247,7 @@ static int qeth_l3_start_ipa_vlan(struct qeth_card *card) { int rc = 0; - QETH_DBF_TEXT(trace, 3, "strtvlan"); + QETH_DBF_TEXT(TRACE, 3, "strtvlan"); if (!qeth_is_supported(card, IPA_FULL_VLAN)) { PRINT_WARN("VLAN not supported on %s\n", @@ -1271,7 +1271,7 @@ static int qeth_l3_start_ipa_multicast(struct qeth_card *card) { int rc; - QETH_DBF_TEXT(trace, 3, "stmcast"); + QETH_DBF_TEXT(TRACE, 3, "stmcast"); if (!qeth_is_supported(card, IPA_MULTICASTING)) { PRINT_WARN("Multicast not supported on %s\n", @@ -1297,7 +1297,7 @@ static int qeth_l3_query_ipassists_cb(struct qeth_card *card, { struct qeth_ipa_cmd *cmd; - QETH_DBF_TEXT(setup, 2, "qipasscb"); + QETH_DBF_TEXT(SETUP, 2, "qipasscb"); cmd = (struct qeth_ipa_cmd *) data; if (cmd->hdr.prot_version == QETH_PROT_IPV4) { @@ -1307,9 +1307,9 @@ static int qeth_l3_query_ipassists_cb(struct qeth_card *card, card->options.ipa6.supported_funcs = cmd->hdr.ipa_supported; card->options.ipa6.enabled_funcs = cmd->hdr.ipa_enabled; } - QETH_DBF_TEXT(setup, 2, "suppenbl"); - QETH_DBF_TEXT_(setup, 2, "%x", cmd->hdr.ipa_supported); - QETH_DBF_TEXT_(setup, 2, "%x", cmd->hdr.ipa_enabled); + QETH_DBF_TEXT(SETUP, 2, "suppenbl"); + QETH_DBF_TEXT_(SETUP, 2, "%x", cmd->hdr.ipa_supported); + QETH_DBF_TEXT_(SETUP, 2, "%x", cmd->hdr.ipa_enabled); return 0; } @@ -1319,7 +1319,7 @@ static int qeth_l3_query_ipassists(struct qeth_card *card, int rc; struct qeth_cmd_buffer *iob; - QETH_DBF_TEXT_(setup, 2, "qipassi%i", prot); + QETH_DBF_TEXT_(SETUP, 2, "qipassi%i", prot); iob = qeth_get_ipacmd_buffer(card, IPA_CMD_QIPASSIST, prot); rc = qeth_send_ipa_cmd(card, iob, qeth_l3_query_ipassists_cb, NULL); return rc; @@ -1330,7 +1330,7 @@ static int qeth_l3_softsetup_ipv6(struct qeth_card *card) { int rc; - QETH_DBF_TEXT(trace, 3, "softipv6"); + QETH_DBF_TEXT(TRACE, 3, "softipv6"); if (card->info.type == QETH_CARD_TYPE_IQD) goto out; @@ -1375,7 +1375,7 @@ static int qeth_l3_start_ipa_ipv6(struct qeth_card *card) { int rc = 0; - QETH_DBF_TEXT(trace, 3, "strtipv6"); + QETH_DBF_TEXT(TRACE, 3, "strtipv6"); if (!qeth_is_supported(card, IPA_IPV6)) { PRINT_WARN("IPv6 not supported on %s\n", @@ -1392,7 +1392,7 @@ static int qeth_l3_start_ipa_broadcast(struct qeth_card *card) { int rc; - QETH_DBF_TEXT(trace, 3, "stbrdcst"); + QETH_DBF_TEXT(TRACE, 3, "stbrdcst"); card->info.broadcast_capable = 0; if (!qeth_is_supported(card, IPA_FILTERING)) { PRINT_WARN("Broadcast not supported on %s\n", @@ -1462,7 +1462,7 @@ static int qeth_l3_start_ipa_checksum(struct qeth_card *card) { int rc = 0; - QETH_DBF_TEXT(trace, 3, "strtcsum"); + QETH_DBF_TEXT(TRACE, 3, "strtcsum"); if (card->options.checksum_type == NO_CHECKSUMMING) { PRINT_WARN("Using no checksumming on %s.\n", @@ -1493,7 +1493,7 @@ static int qeth_l3_start_ipa_tso(struct qeth_card *card) { int rc; - QETH_DBF_TEXT(trace, 3, "sttso"); + QETH_DBF_TEXT(TRACE, 3, "sttso"); if (!qeth_is_supported(card, IPA_OUTBOUND_TSO)) { PRINT_WARN("Outbound TSO not supported on %s\n", @@ -1518,7 +1518,7 @@ static int qeth_l3_start_ipa_tso(struct qeth_card *card) static int qeth_l3_start_ipassists(struct qeth_card *card) { - QETH_DBF_TEXT(trace, 3, "strtipas"); + 
QETH_DBF_TEXT(TRACE, 3, "strtipas"); qeth_l3_start_ipa_arp_processing(card); /* go on*/ qeth_l3_start_ipa_ip_fragmentation(card); /* go on*/ qeth_l3_start_ipa_source_mac(card); /* go on*/ @@ -1538,7 +1538,7 @@ static int qeth_l3_put_unique_id(struct qeth_card *card) struct qeth_cmd_buffer *iob; struct qeth_ipa_cmd *cmd; - QETH_DBF_TEXT(trace, 2, "puniqeid"); + QETH_DBF_TEXT(TRACE, 2, "puniqeid"); if ((card->info.unique_id & UNIQUE_ID_NOT_BY_CARD) == UNIQUE_ID_NOT_BY_CARD) @@ -1575,7 +1575,7 @@ static int qeth_l3_iqd_read_initial_mac(struct qeth_card *card) struct qeth_cmd_buffer *iob; struct qeth_ipa_cmd *cmd; - QETH_DBF_TEXT(setup, 2, "hsrmac"); + QETH_DBF_TEXT(SETUP, 2, "hsrmac"); iob = qeth_get_ipacmd_buffer(card, IPA_CMD_CREATE_ADDR, QETH_PROT_IPV6); @@ -1616,7 +1616,7 @@ static int qeth_l3_get_unique_id(struct qeth_card *card) struct qeth_cmd_buffer *iob; struct qeth_ipa_cmd *cmd; - QETH_DBF_TEXT(setup, 2, "guniqeid"); + QETH_DBF_TEXT(SETUP, 2, "guniqeid"); if (!qeth_is_supported(card, IPA_IPV6)) { card->info.unique_id = UNIQUE_ID_IF_CREATE_ADDR_FAILED | @@ -1649,7 +1649,7 @@ static void qeth_l3_add_mc(struct qeth_card *card, struct in_device *in4_dev) struct ip_mc_list *im4; char buf[MAX_ADDR_LEN]; - QETH_DBF_TEXT(trace, 4, "addmc"); + QETH_DBF_TEXT(TRACE, 4, "addmc"); for (im4 = in4_dev->mc_list; im4; im4 = im4->next) { qeth_l3_get_mac_for_ipm(im4->multiaddr, buf, in4_dev->dev); ipm = qeth_l3_get_addr_buffer(QETH_PROT_IPV4); @@ -1669,7 +1669,7 @@ static void qeth_l3_add_vlan_mc(struct qeth_card *card) struct vlan_group *vg; int i; - QETH_DBF_TEXT(trace, 4, "addmcvl"); + QETH_DBF_TEXT(TRACE, 4, "addmcvl"); if (!qeth_is_supported(card, IPA_FULL_VLAN) || (card->vlangrp == NULL)) return; @@ -1693,7 +1693,7 @@ static void qeth_l3_add_multicast_ipv4(struct qeth_card *card) { struct in_device *in4_dev; - QETH_DBF_TEXT(trace, 4, "chkmcv4"); + QETH_DBF_TEXT(TRACE, 4, "chkmcv4"); in4_dev = in_dev_get(card->dev); if (in4_dev == NULL) return; @@ -1711,7 +1711,7 @@ static void qeth_l3_add_mc6(struct qeth_card *card, struct inet6_dev *in6_dev) struct ifmcaddr6 *im6; char buf[MAX_ADDR_LEN]; - QETH_DBF_TEXT(trace, 4, "addmc6"); + QETH_DBF_TEXT(TRACE, 4, "addmc6"); for (im6 = in6_dev->mc_list; im6 != NULL; im6 = im6->next) { ndisc_mc_map(&im6->mca_addr, buf, in6_dev->dev, 0); ipm = qeth_l3_get_addr_buffer(QETH_PROT_IPV6); @@ -1732,7 +1732,7 @@ static void qeth_l3_add_vlan_mc6(struct qeth_card *card) struct vlan_group *vg; int i; - QETH_DBF_TEXT(trace, 4, "admc6vl"); + QETH_DBF_TEXT(TRACE, 4, "admc6vl"); if (!qeth_is_supported(card, IPA_FULL_VLAN) || (card->vlangrp == NULL)) return; @@ -1756,7 +1756,7 @@ static void qeth_l3_add_multicast_ipv6(struct qeth_card *card) { struct inet6_dev *in6_dev; - QETH_DBF_TEXT(trace, 4, "chkmcv6"); + QETH_DBF_TEXT(TRACE, 4, "chkmcv6"); if (!qeth_is_supported(card, IPA_IPV6)) return ; in6_dev = in6_dev_get(card->dev); @@ -1777,7 +1777,7 @@ static void qeth_l3_free_vlan_addresses4(struct qeth_card *card, struct in_ifaddr *ifa; struct qeth_ipaddr *addr; - QETH_DBF_TEXT(trace, 4, "frvaddr4"); + QETH_DBF_TEXT(TRACE, 4, "frvaddr4"); in_dev = in_dev_get(vlan_group_get_device(card->vlangrp, vid)); if (!in_dev) @@ -1803,7 +1803,7 @@ static void qeth_l3_free_vlan_addresses6(struct qeth_card *card, struct inet6_ifaddr *ifa; struct qeth_ipaddr *addr; - QETH_DBF_TEXT(trace, 4, "frvaddr6"); + QETH_DBF_TEXT(TRACE, 4, "frvaddr6"); in6_dev = in6_dev_get(vlan_group_get_device(card->vlangrp, vid)); if (!in6_dev) @@ -1838,7 +1838,7 @@ static void qeth_l3_vlan_rx_register(struct 
net_device *dev, struct qeth_card *card = netdev_priv(dev); unsigned long flags; - QETH_DBF_TEXT(trace, 4, "vlanreg"); + QETH_DBF_TEXT(TRACE, 4, "vlanreg"); spin_lock_irqsave(&card->vlanlock, flags); card->vlangrp = grp; spin_unlock_irqrestore(&card->vlanlock, flags); @@ -1876,7 +1876,7 @@ static void qeth_l3_vlan_rx_kill_vid(struct net_device *dev, unsigned short vid) struct qeth_card *card = netdev_priv(dev); unsigned long flags; - QETH_DBF_TEXT_(trace, 4, "kid:%d", vid); + QETH_DBF_TEXT_(TRACE, 4, "kid:%d", vid); spin_lock_irqsave(&card->vlanlock, flags); /* unregister IP addresses of vlan device */ qeth_l3_free_vlan_addresses(card, vid); @@ -2006,8 +2006,8 @@ static void qeth_l3_process_inbound_buffer(struct qeth_card *card, break; default: dev_kfree_skb_any(skb); - QETH_DBF_TEXT(trace, 3, "inbunkno"); - QETH_DBF_HEX(control, 3, hdr, QETH_DBF_CONTROL_LEN); + QETH_DBF_TEXT(TRACE, 3, "inbunkno"); + QETH_DBF_HEX(CTRL, 3, hdr, QETH_DBF_CTRL_LEN); continue; } @@ -2074,7 +2074,7 @@ static struct qeth_card *qeth_l3_get_card_from_dev(struct net_device *dev) card = netdev_priv(vlan_dev_info(dev)->real_dev); if (card->options.layer2) card = NULL; - QETH_DBF_TEXT_(trace, 4, "%d", rc); + QETH_DBF_TEXT_(TRACE, 4, "%d", rc); return card ; } @@ -2082,8 +2082,8 @@ static int qeth_l3_stop_card(struct qeth_card *card, int recovery_mode) { int rc = 0; - QETH_DBF_TEXT(setup, 2, "stopcard"); - QETH_DBF_HEX(setup, 2, &card, sizeof(void *)); + QETH_DBF_TEXT(SETUP, 2, "stopcard"); + QETH_DBF_HEX(SETUP, 2, &card, sizeof(void *)); qeth_set_allowed_threads(card, 0, 1); if (qeth_wait_for_threads(card, ~QETH_RECOVER_THREAD)) @@ -2096,7 +2096,7 @@ static int qeth_l3_stop_card(struct qeth_card *card, int recovery_mode) if (!card->use_hard_stop) { rc = qeth_send_stoplan(card); if (rc) - QETH_DBF_TEXT_(setup, 2, "1err%d", rc); + QETH_DBF_TEXT_(SETUP, 2, "1err%d", rc); } card->state = CARD_STATE_SOFTSETUP; } @@ -2110,7 +2110,7 @@ static int qeth_l3_stop_card(struct qeth_card *card, int recovery_mode) (card->info.type != QETH_CARD_TYPE_IQD)) { rc = qeth_l3_put_unique_id(card); if (rc) - QETH_DBF_TEXT_(setup, 2, "2err%d", rc); + QETH_DBF_TEXT_(SETUP, 2, "2err%d", rc); } qeth_qdio_clear_card(card, 0); qeth_clear_qdio_buffers(card); @@ -2129,7 +2129,7 @@ static void qeth_l3_set_multicast_list(struct net_device *dev) { struct qeth_card *card = netdev_priv(dev); - QETH_DBF_TEXT(trace, 3, "setmulti"); + QETH_DBF_TEXT(TRACE, 3, "setmulti"); qeth_l3_delete_mc_addresses(card); qeth_l3_add_multicast_ipv4(card); #ifdef CONFIG_QETH_IPV6 @@ -2169,7 +2169,7 @@ static int qeth_l3_arp_set_no_entries(struct qeth_card *card, int no_entries) int tmp; int rc; - QETH_DBF_TEXT(trace, 3, "arpstnoe"); + QETH_DBF_TEXT(TRACE, 3, "arpstnoe"); /* * currently GuestLAN only supports the ARP assist function @@ -2223,17 +2223,17 @@ static int qeth_l3_arp_query_cb(struct qeth_card *card, int uentry_size; int i; - QETH_DBF_TEXT(trace, 4, "arpquecb"); + QETH_DBF_TEXT(TRACE, 4, "arpquecb"); qinfo = (struct qeth_arp_query_info *) reply->param; cmd = (struct qeth_ipa_cmd *) data; if (cmd->hdr.return_code) { - QETH_DBF_TEXT_(trace, 4, "qaer1%i", cmd->hdr.return_code); + QETH_DBF_TEXT_(TRACE, 4, "qaer1%i", cmd->hdr.return_code); return 0; } if (cmd->data.setassparms.hdr.return_code) { cmd->hdr.return_code = cmd->data.setassparms.hdr.return_code; - QETH_DBF_TEXT_(trace, 4, "qaer2%i", cmd->hdr.return_code); + QETH_DBF_TEXT_(TRACE, 4, "qaer2%i", cmd->hdr.return_code); return 0; } qdata = &cmd->data.setassparms.data.query_arp; @@ -2255,17 +2255,17 @@ static int 
qeth_l3_arp_query_cb(struct qeth_card *card, /* check if there is enough room in userspace */ if ((qinfo->udata_len - qinfo->udata_offset) < qdata->no_entries * uentry_size){ - QETH_DBF_TEXT_(trace, 4, "qaer3%i", -ENOMEM); + QETH_DBF_TEXT_(TRACE, 4, "qaer3%i", -ENOMEM); cmd->hdr.return_code = -ENOMEM; PRINT_WARN("query ARP user space buffer is too small for " "the returned number of ARP entries. " "Aborting query!\n"); goto out_error; } - QETH_DBF_TEXT_(trace, 4, "anore%i", + QETH_DBF_TEXT_(TRACE, 4, "anore%i", cmd->data.setassparms.hdr.number_of_replies); - QETH_DBF_TEXT_(trace, 4, "aseqn%i", cmd->data.setassparms.hdr.seq_no); - QETH_DBF_TEXT_(trace, 4, "anoen%i", qdata->no_entries); + QETH_DBF_TEXT_(TRACE, 4, "aseqn%i", cmd->data.setassparms.hdr.seq_no); + QETH_DBF_TEXT_(TRACE, 4, "anoen%i", qdata->no_entries); if (qinfo->mask_bits & QETH_QARP_STRIP_ENTRIES) { /* strip off "media specific information" */ @@ -2301,7 +2301,7 @@ static int qeth_l3_send_ipa_arp_cmd(struct qeth_card *card, unsigned long), void *reply_param) { - QETH_DBF_TEXT(trace, 4, "sendarp"); + QETH_DBF_TEXT(TRACE, 4, "sendarp"); memcpy(iob->data, IPA_PDU_HEADER, IPA_PDU_HEADER_SIZE); memcpy(QETH_IPA_CMD_DEST_ADDR(iob->data), @@ -2317,7 +2317,7 @@ static int qeth_l3_arp_query(struct qeth_card *card, char __user *udata) int tmp; int rc; - QETH_DBF_TEXT(trace, 3, "arpquery"); + QETH_DBF_TEXT(TRACE, 3, "arpquery"); if (!qeth_is_supported(card,/*IPA_QUERY_ARP_ADDR_INFO*/ IPA_ARP_PROCESSING)) { @@ -2362,7 +2362,7 @@ static int qeth_l3_arp_add_entry(struct qeth_card *card, int tmp; int rc; - QETH_DBF_TEXT(trace, 3, "arpadent"); + QETH_DBF_TEXT(TRACE, 3, "arpadent"); /* * currently GuestLAN only supports the ARP assist function @@ -2404,7 +2404,7 @@ static int qeth_l3_arp_remove_entry(struct qeth_card *card, int tmp; int rc; - QETH_DBF_TEXT(trace, 3, "arprment"); + QETH_DBF_TEXT(TRACE, 3, "arprment"); /* * currently GuestLAN only supports the ARP assist function @@ -2443,7 +2443,7 @@ static int qeth_l3_arp_flush_cache(struct qeth_card *card) int rc; int tmp; - QETH_DBF_TEXT(trace, 3, "arpflush"); + QETH_DBF_TEXT(TRACE, 3, "arpflush"); /* * currently GuestLAN only supports the ARP assist function @@ -2552,14 +2552,14 @@ static int qeth_l3_do_ioctl(struct net_device *dev, struct ifreq *rq, int cmd) rc = -EOPNOTSUPP; } if (rc) - QETH_DBF_TEXT_(trace, 2, "ioce%d", rc); + QETH_DBF_TEXT_(TRACE, 2, "ioce%d", rc); return rc; } static void qeth_l3_fill_header(struct qeth_card *card, struct qeth_hdr *hdr, struct sk_buff *skb, int ipv, int cast_type) { - QETH_DBF_TEXT(trace, 6, "fillhdr"); + QETH_DBF_TEXT(TRACE, 6, "fillhdr"); memset(hdr, 0, sizeof(struct qeth_hdr)); hdr->hdr.l3.id = QETH_HEADER_TYPE_LAYER3; @@ -2638,7 +2638,7 @@ static int qeth_l3_hard_start_xmit(struct sk_buff *skb, struct net_device *dev) enum qeth_large_send_types large_send = QETH_LARGE_SEND_NO; struct qeth_eddp_context *ctx = NULL; - QETH_DBF_TEXT(trace, 6, "l3xmit"); + QETH_DBF_TEXT(TRACE, 6, "l3xmit"); if ((card->info.type == QETH_CARD_TYPE_IQD) && (skb->protocol != htons(ETH_P_IPV6)) && @@ -2799,7 +2799,7 @@ static int qeth_l3_open(struct net_device *dev) { struct qeth_card *card = netdev_priv(dev); - QETH_DBF_TEXT(trace, 4, "qethopen"); + QETH_DBF_TEXT(TRACE, 4, "qethopen"); if (card->state != CARD_STATE_SOFTSETUP) return -ENODEV; card->data.state = CH_STATE_UP; @@ -2816,7 +2816,7 @@ static int qeth_l3_stop(struct net_device *dev) { struct qeth_card *card = netdev_priv(dev); - QETH_DBF_TEXT(trace, 4, "qethstop"); + QETH_DBF_TEXT(TRACE, 4, "qethstop"); 
netif_tx_disable(dev); card->dev->flags &= ~IFF_UP; if (card->state == CARD_STATE_UP) @@ -2982,7 +2982,7 @@ static void qeth_l3_qdio_input_handler(struct ccw_device *ccwdev, int index; int i; - QETH_DBF_TEXT(trace, 6, "qdinput"); + QETH_DBF_TEXT(TRACE, 6, "qdinput"); card = (struct qeth_card *) card_ptr; net_dev = card->dev; if (card->options.performance_stats) { @@ -2991,11 +2991,11 @@ static void qeth_l3_qdio_input_handler(struct ccw_device *ccwdev, } if (status & QDIO_STATUS_LOOK_FOR_ERROR) { if (status & QDIO_STATUS_ACTIVATE_CHECK_CONDITION) { - QETH_DBF_TEXT(trace, 1, "qdinchk"); - QETH_DBF_TEXT_(trace, 1, "%s", CARD_BUS_ID(card)); - QETH_DBF_TEXT_(trace, 1, "%04X%04X", + QETH_DBF_TEXT(TRACE, 1, "qdinchk"); + QETH_DBF_TEXT_(TRACE, 1, "%s", CARD_BUS_ID(card)); + QETH_DBF_TEXT_(TRACE, 1, "%04X%04X", first_element, count); - QETH_DBF_TEXT_(trace, 1, "%04X%04X", queue, status); + QETH_DBF_TEXT_(TRACE, 1, "%04X%04X", queue, status); qeth_schedule_recovery(card); return; } @@ -3059,8 +3059,8 @@ static int __qeth_l3_set_online(struct ccwgroup_device *gdev, int recovery_mode) enum qeth_card_states recover_flag; BUG_ON(!card); - QETH_DBF_TEXT(setup, 2, "setonlin"); - QETH_DBF_HEX(setup, 2, &card, sizeof(void *)); + QETH_DBF_TEXT(SETUP, 2, "setonlin"); + QETH_DBF_HEX(SETUP, 2, &card, sizeof(void *)); qeth_set_allowed_threads(card, QETH_RECOVER_THREAD, 1); if (qeth_wait_for_threads(card, ~QETH_RECOVER_THREAD)) { @@ -3072,23 +3072,23 @@ static int __qeth_l3_set_online(struct ccwgroup_device *gdev, int recovery_mode) recover_flag = card->state; rc = ccw_device_set_online(CARD_RDEV(card)); if (rc) { - QETH_DBF_TEXT_(setup, 2, "1err%d", rc); + QETH_DBF_TEXT_(SETUP, 2, "1err%d", rc); return -EIO; } rc = ccw_device_set_online(CARD_WDEV(card)); if (rc) { - QETH_DBF_TEXT_(setup, 2, "1err%d", rc); + QETH_DBF_TEXT_(SETUP, 2, "1err%d", rc); return -EIO; } rc = ccw_device_set_online(CARD_DDEV(card)); if (rc) { - QETH_DBF_TEXT_(setup, 2, "1err%d", rc); + QETH_DBF_TEXT_(SETUP, 2, "1err%d", rc); return -EIO; } rc = qeth_core_hardsetup_card(card); if (rc) { - QETH_DBF_TEXT_(setup, 2, "2err%d", rc); + QETH_DBF_TEXT_(SETUP, 2, "2err%d", rc); goto out_remove; } @@ -3101,11 +3101,11 @@ static int __qeth_l3_set_online(struct ccwgroup_device *gdev, int recovery_mode) qeth_print_status_message(card); /* softsetup */ - QETH_DBF_TEXT(setup, 2, "softsetp"); + QETH_DBF_TEXT(SETUP, 2, "softsetp"); rc = qeth_send_startlan(card); if (rc) { - QETH_DBF_TEXT_(setup, 2, "1err%d", rc); + QETH_DBF_TEXT_(SETUP, 2, "1err%d", rc); if (rc == 0xe080) { PRINT_WARN("LAN on card %s if offline! 
" "Waiting for STARTLAN from card.\n", @@ -3119,21 +3119,21 @@ static int __qeth_l3_set_online(struct ccwgroup_device *gdev, int recovery_mode) rc = qeth_l3_setadapter_parms(card); if (rc) - QETH_DBF_TEXT_(setup, 2, "2err%d", rc); + QETH_DBF_TEXT_(SETUP, 2, "2err%d", rc); rc = qeth_l3_start_ipassists(card); if (rc) - QETH_DBF_TEXT_(setup, 2, "3err%d", rc); + QETH_DBF_TEXT_(SETUP, 2, "3err%d", rc); rc = qeth_l3_setrouting_v4(card); if (rc) - QETH_DBF_TEXT_(setup, 2, "4err%d", rc); + QETH_DBF_TEXT_(SETUP, 2, "4err%d", rc); rc = qeth_l3_setrouting_v6(card); if (rc) - QETH_DBF_TEXT_(setup, 2, "5err%d", rc); + QETH_DBF_TEXT_(SETUP, 2, "5err%d", rc); netif_tx_disable(card->dev); rc = qeth_init_qdio_queues(card); if (rc) { - QETH_DBF_TEXT_(setup, 2, "6err%d", rc); + QETH_DBF_TEXT_(SETUP, 2, "6err%d", rc); goto out_remove; } card->state = CARD_STATE_SOFTSETUP; @@ -3172,8 +3172,8 @@ static int __qeth_l3_set_offline(struct ccwgroup_device *cgdev, int rc = 0, rc2 = 0, rc3 = 0; enum qeth_card_states recover_flag; - QETH_DBF_TEXT(setup, 3, "setoffl"); - QETH_DBF_HEX(setup, 3, &card, sizeof(void *)); + QETH_DBF_TEXT(SETUP, 3, "setoffl"); + QETH_DBF_HEX(SETUP, 3, &card, sizeof(void *)); if (card->dev && netif_carrier_ok(card->dev)) netif_carrier_off(card->dev); @@ -3189,7 +3189,7 @@ static int __qeth_l3_set_offline(struct ccwgroup_device *cgdev, if (!rc) rc = (rc2) ? rc2 : rc3; if (rc) - QETH_DBF_TEXT_(setup, 2, "1err%d", rc); + QETH_DBF_TEXT_(SETUP, 2, "1err%d", rc); if (recover_flag == CARD_STATE_UP) card->state = CARD_STATE_RECOVER; /* let user_space know that device is offline */ @@ -3208,11 +3208,11 @@ static int qeth_l3_recover(void *ptr) int rc = 0; card = (struct qeth_card *) ptr; - QETH_DBF_TEXT(trace, 2, "recover1"); - QETH_DBF_HEX(trace, 2, &card, sizeof(void *)); + QETH_DBF_TEXT(TRACE, 2, "recover1"); + QETH_DBF_HEX(TRACE, 2, &card, sizeof(void *)); if (!qeth_do_run_thread(card, QETH_RECOVER_THREAD)) return 0; - QETH_DBF_TEXT(trace, 2, "recover2"); + QETH_DBF_TEXT(TRACE, 2, "recover2"); PRINT_WARN("Recovery of device %s started ...\n", CARD_BUS_ID(card)); card->use_hard_stop = 1; @@ -3258,7 +3258,7 @@ static int qeth_l3_ip_event(struct notifier_block *this, if (dev_net(dev) != &init_net) return NOTIFY_DONE; - QETH_DBF_TEXT(trace, 3, "ipevent"); + QETH_DBF_TEXT(TRACE, 3, "ipevent"); card = qeth_l3_get_card_from_dev(dev); if (!card) return NOTIFY_DONE; @@ -3305,7 +3305,7 @@ static int qeth_l3_ip6_event(struct notifier_block *this, struct qeth_ipaddr *addr; struct qeth_card *card; - QETH_DBF_TEXT(trace, 3, "ip6event"); + QETH_DBF_TEXT(TRACE, 3, "ip6event"); card = qeth_l3_get_card_from_dev(dev); if (!card) @@ -3348,7 +3348,7 @@ static int qeth_l3_register_notifiers(void) { int rc; - QETH_DBF_TEXT(trace, 5, "regnotif"); + QETH_DBF_TEXT(TRACE, 5, "regnotif"); rc = register_inetaddr_notifier(&qeth_l3_ip_notifier); if (rc) return rc; @@ -3367,7 +3367,7 @@ static int qeth_l3_register_notifiers(void) static void qeth_l3_unregister_notifiers(void) { - QETH_DBF_TEXT(trace, 5, "unregnot"); + QETH_DBF_TEXT(TRACE, 5, "unregnot"); BUG_ON(unregister_inetaddr_notifier(&qeth_l3_ip_notifier)); #ifdef CONFIG_QETH_IPV6 BUG_ON(unregister_inet6addr_notifier(&qeth_l3_ip6_notifier)); -- cgit v1.2.3-59-g8ed1b From b403e685b7c57f7912bae36987433e72c616f418 Mon Sep 17 00:00:00 2001 From: Frank Blaschka Date: Tue, 1 Apr 2008 10:26:59 +0200 Subject: qeth: core code should alloc headroom for LLC protocol Allocate headroom for TR_HLEN but using only ETH_HLEN causes rx performance degradation. 
Allocate ETH_HLEN for ethernet and TR_HLEN for token ring (layer 3 mode). Signed-off-by: Frank Blaschka Signed-off-by: Jeff Garzik --- drivers/s390/net/qeth_core_main.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) (limited to 'drivers/s390') diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c index 5cfe0ef7719a..055f5c3e7b56 100644 --- a/drivers/s390/net/qeth_core_main.c +++ b/drivers/s390/net/qeth_core_main.c @@ -4002,7 +4002,11 @@ struct sk_buff *qeth_core_get_next_skb(struct qeth_card *card, } } else { skb_len = (*hdr)->hdr.l3.length; - headroom = max((int)ETH_HLEN, (int)TR_HLEN); + if ((card->info.link_type == QETH_LINK_TYPE_LANE_TR) || + (card->info.link_type == QETH_LINK_TYPE_HSTR)) + headroom = TR_HLEN; + else + headroom = ETH_HLEN; } if (!skb_len) -- cgit v1.2.3-59-g8ed1b From 3caa4af834df519fda0f1ea6af4a5c7abfec98c7 Mon Sep 17 00:00:00 2001 From: Ursula Braun Date: Tue, 1 Apr 2008 10:27:00 +0200 Subject: qeth: keep ip-address after LAN_OFFLINE failure Problem: If setting of an ip-address fails with LAN_OFFLINE, qeth does not save the ip-address in its internal list of set ip-addresses. qeth recovers after a following STARTLAN event, but cannot set the unsaved ip-address. Solution: save the ip-address in the qeth-maintained list of ip-addresses after a LAN_OFFLINE failure for SETIP. Signed-off-by: Ursula Braun Signed-off-by: Frank Blaschka Signed-off-by: Jeff Garzik --- drivers/s390/net/qeth_l3_main.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) (limited to 'drivers/s390') diff --git a/drivers/s390/net/qeth_l3_main.c b/drivers/s390/net/qeth_l3_main.c index c5e90eecae45..e1bfe56087d6 100644 --- a/drivers/s390/net/qeth_l3_main.c +++ b/drivers/s390/net/qeth_l3_main.c @@ -461,7 +461,7 @@ static void qeth_l3_set_ip_addr_list(struct qeth_card *card) spin_unlock_irqrestore(&card->ip_lock, flags); rc = qeth_l3_register_addr_entry(card, todo); spin_lock_irqsave(&card->ip_lock, flags); - if (!rc) + if (!rc || (rc == IPA_RC_LAN_OFFLINE)) list_add_tail(&todo->entry, &card->ip_list); else kfree(todo); -- cgit v1.2.3-59-g8ed1b From 08a8a0c59e54f7eb80897c1e77efa4a541d11008 Mon Sep 17 00:00:00 2001 From: Josef 'Jeff' Sipek Date: Thu, 17 Apr 2008 07:45:56 +0200 Subject: [S390] dasd: fix double elevator_exit call when deadline iosched fails to load I compiled the kernel without deadline, and the dasd code exits the old scheduler (CFQ), fails to load the new one (deadline), and then things just hang - with one of these (sorry about the weird chars - I copy & pasted it from a 3270 console): dasd(eckd): 0.0.0151: 3390/0A(CU:3990/01) Cyl:3338 Head:15 Sec:224 ------------ cut here ------------ Badness at kernel/mutex.c:134 Modules linked in: dasd_eckd_mod dasd_mod CPU: 0 Not tainted 2.6.25-rc3 #9 Process exe (pid: 538, task: 000000000d172000, ksp: 000000000d21ef88) Krnl PSW : 0404000180000000 000000000022fb5c (mutex_lock_nested+0x2a4/0x2cc) R:0 T:1 IO:0 EX:0 Key:0 M:1 W:0 P:0 AS:0 CC:0 PM:0 EA:3 Krnl GPRS: 0000000000024218 000000000076fc78 0000000000000000 000000000000000f 000000000022f92e 0000000000449898 000000000f921c00 000003e000162590 00000000001539c4 000000000d172000 070000007fffffff 000000000d21f400 000000000f8f2560 00000000002413f8 000000000022fb44 000000000d21f400 Krnl Code: 000000000022fb50: bf2f1000 icm %r2,15,0(%r1) 000000000022fb54: a774fef6 brc 7,22f940 000000000022fb58: a7f40001 brc 15,22fb5a >000000000022fb5c: a7f4fef2 brc 15,22f940 000000000022fb60: c0e5fffa112a brasl %r14,171db4 000000000022fb66: 1222 ltr %r2,%r2 000000000022fb68: 
a784fedb brc 8,22f91e 000000000022fb6c: c010002a0086 larl %r1,76fc78 Call Trace: (<000000000022f92e> mutex_lock_nested+0x76/0x2cc) <00000000001539c4> elevator_exit+0x38/0x80 <0000000000156ffe> blk_cleanup_queue+0x62/0x7c <000003e0001d5414> dasd_change_state+0xe0/0x8ec <000003e0001d5cae> dasd_set_target_state+0x8e/0x9c <000003e0001d5f74> dasd_generic_set_online+0x160/0x284 <000003e00011e83a> dasd_eckd_set_online+0x2e/0x40 <0000000000199bf4> ccw_device_set_online+0x170/0x2c0 <0000000000199d9e> online_store_recog_and_online+0x5a/0x14c <000000000019a08a> online_store+0xbe/0x2ec <000000000018456c> dev_attr_store+0x38/0x58 <000000000010efbc> sysfs_write_file+0x130/0x190 <00000000000af582> vfs_write+0xb2/0x160 <00000000000afc7c> sys_write+0x54/0x9c <0000000000025e16> sys32_write+0x2e/0x50 <0000000000024218> sysc_noemu+0x10/0x16 <0000000077e82bd2> 0x77e82bd2 Set elevator pointer to NULL in order to avoid double elevator_exit calls when elevator_init call for deadline iosched fails. Also make sure the dasd device driver depends on IOSCHED_DEADLINE so the default IO scheduler of the dasd driver is present. Signed-off-by: Josef 'Jeff' Sipek Signed-off-by: Martin Schwidefsky Signed-off-by: Heiko Carstens --- drivers/s390/block/Kconfig | 1 + drivers/s390/block/dasd.c | 1 + 2 files changed, 2 insertions(+) (limited to 'drivers/s390') diff --git a/drivers/s390/block/Kconfig b/drivers/s390/block/Kconfig index e879b212cf43..07883197f474 100644 --- a/drivers/s390/block/Kconfig +++ b/drivers/s390/block/Kconfig @@ -20,6 +20,7 @@ config DCSSBLK config DASD tristate "Support for DASD devices" depends on CCW && BLOCK + select IOSCHED_DEADLINE help Enable this option if you want to access DASDs directly utilizing S/390s channel subsystem commands. This is necessary for running diff --git a/drivers/s390/block/dasd.c b/drivers/s390/block/dasd.c index ccf46c96adb4..54f686d2c694 100644 --- a/drivers/s390/block/dasd.c +++ b/drivers/s390/block/dasd.c @@ -1956,6 +1956,7 @@ static int dasd_alloc_queue(struct dasd_block *block) block->request_queue->queuedata = block; elevator_exit(block->request_queue->elevator); + block->request_queue->elevator = NULL; rc = elevator_init(block->request_queue, "deadline"); if (rc) { blk_cleanup_queue(block->request_queue); -- cgit v1.2.3-59-g8ed1b From 22806dc1a8ffd88a7c7bdd070879e6e323db496a Mon Sep 17 00:00:00 2001 From: Cornelia Huck Date: Thu, 17 Apr 2008 07:45:59 +0200 Subject: [S390] cio: Fix race for "fast" path gone/path back situations. Make sure we wait for previous evaluations triggered by path state changes to have settled before we manipulate path states again. Signed-off-by: Cornelia Huck Signed-off-by: Martin Schwidefsky Signed-off-by: Heiko Carstens --- drivers/s390/cio/chsc.c | 12 ++++++++++-- drivers/s390/cio/css.c | 6 ++++++ drivers/s390/cio/css.h | 1 + 3 files changed, 17 insertions(+), 2 deletions(-) (limited to 'drivers/s390') diff --git a/drivers/s390/cio/chsc.c b/drivers/s390/cio/chsc.c index 007aaeb4f532..b6a40c20780d 100644 --- a/drivers/s390/cio/chsc.c +++ b/drivers/s390/cio/chsc.c @@ -217,6 +217,8 @@ void chsc_chp_offline(struct chp_id chpid) if (chp_get_status(chpid) <= 0) return; + /* Wait until previous actions have settled. */ + css_wait_for_slow_path(); for_each_subchannel_staged(s390_subchannel_remove_chpid, NULL, &chpid); } @@ -303,7 +305,8 @@ static void s390_process_res_acc (struct res_acc_data *res_data) sprintf(dbf_txt, "fla%x", res_data->fla); CIO_TRACE_EVENT( 2, dbf_txt); } - + /* Wait until previous actions have settled. 
*/ + css_wait_for_slow_path(); /* * I/O resources may have become accessible. * Scan through all subchannels that may be concerned and @@ -561,9 +564,12 @@ void chsc_chp_online(struct chp_id chpid) sprintf(dbf_txt, "cadd%x.%02x", chpid.cssid, chpid.id); CIO_TRACE_EVENT(2, dbf_txt); - if (chp_get_status(chpid) != 0) + if (chp_get_status(chpid) != 0) { + /* Wait until previous actions have settled. */ + css_wait_for_slow_path(); for_each_subchannel_staged(__chp_add, __chp_add_new_sch, &chpid); + } } static void __s390_subchannel_vary_chpid(struct subchannel *sch, @@ -650,6 +656,8 @@ __s390_vary_chpid_on(struct subchannel_id schid, void *data) */ int chsc_chp_vary(struct chp_id chpid, int on) { + /* Wait until previous actions have settled. */ + css_wait_for_slow_path(); /* * Redo PathVerification on the devices the chpid connects to */ diff --git a/drivers/s390/cio/css.c b/drivers/s390/cio/css.c index 3b45bbe6cce0..3e829c827493 100644 --- a/drivers/s390/cio/css.c +++ b/drivers/s390/cio/css.c @@ -533,6 +533,12 @@ void css_schedule_eval_all(void) spin_unlock_irqrestore(&slow_subchannel_lock, flags); } +void css_wait_for_slow_path(void) +{ + flush_workqueue(ccw_device_notify_work); + flush_workqueue(slow_path_wq); +} + /* Reprobe subchannel if unregistered. */ static int reprobe_subchannel(struct subchannel_id schid, void *data) { diff --git a/drivers/s390/cio/css.h b/drivers/s390/cio/css.h index b70554523552..e1913518f354 100644 --- a/drivers/s390/cio/css.h +++ b/drivers/s390/cio/css.h @@ -144,6 +144,7 @@ struct schib; int css_sch_is_valid(struct schib *); extern struct workqueue_struct *slow_path_wq; +void css_wait_for_slow_path(void); extern struct attribute_group *subch_attr_groups[]; #endif -- cgit v1.2.3-59-g8ed1b From fe6173d9b33dba18ec462051750fb1b9abcd796d Mon Sep 17 00:00:00 2001 From: Cornelia Huck Date: Thu, 17 Apr 2008 07:46:00 +0200 Subject: [S390] cio: Trigger verification on device/path not operational. Currently, we don't do much on no path or no device situations during normal user I/O, since we rely on reports regarding those events by the machine. If we trigger a path verification to bring our device state up-to-date, we (a) may recover from path failures earlier and (b) better handle situations where the hardware/hypervisor doesn't give us enough notifications. 
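The pattern being added is small enough to show in isolation: if the start attempt reports -ENODEV or -EACCES, a verification event is queued in addition to returning the error. Below is a compile-and-run userspace sketch of that control flow; start_io(), pending_event and EVT_VERIFY are invented stand-ins for cio_start_key(), the device FSM and DEV_EVENT_VERIFY, not the real cio interfaces.

#include <errno.h>
#include <stdio.h>

enum dev_event { EVT_NONE, EVT_VERIFY };

static enum dev_event pending_event = EVT_NONE;

/* Stand-in for cio_start()/cio_start_key(); just reports a canned result. */
static int start_io(int simulated_rc)
{
        return simulated_rc;
}

static int start_io_checked(int simulated_rc)
{
        int rc = start_io(simulated_rc);

        /* Device gone or no usable path left: also ask for verification. */
        if (rc == -ENODEV || rc == -EACCES)
                pending_event = EVT_VERIFY;
        return rc;      /* the error itself is still passed to the caller */
}

int main(void)
{
        start_io_checked(-ENODEV);
        printf("verification queued: %s\n",
               pending_event == EVT_VERIFY ? "yes" : "no");
        return 0;
}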
Signed-off-by: Cornelia Huck Signed-off-by: Martin Schwidefsky Signed-off-by: Heiko Carstens --- drivers/s390/cio/device_ops.c | 9 ++++++++- drivers/s390/cio/device_status.c | 6 +++++- 2 files changed, 13 insertions(+), 2 deletions(-) (limited to 'drivers/s390') diff --git a/drivers/s390/cio/device_ops.c b/drivers/s390/cio/device_ops.c index 49b58eb0fab8..a1718a0aa539 100644 --- a/drivers/s390/cio/device_ops.c +++ b/drivers/s390/cio/device_ops.c @@ -193,8 +193,15 @@ int ccw_device_start_key(struct ccw_device *cdev, struct ccw1 *cpa, return -EACCES; } ret = cio_start_key (sch, cpa, lpm, key); - if (ret == 0) + switch (ret) { + case 0: cdev->private->intparm = intparm; + break; + case -EACCES: + case -ENODEV: + dev_fsm_event(cdev, DEV_EVENT_VERIFY); + break; + } return ret; } diff --git a/drivers/s390/cio/device_status.c b/drivers/s390/cio/device_status.c index ebe0848cfe33..4764b9e00b9e 100644 --- a/drivers/s390/cio/device_status.c +++ b/drivers/s390/cio/device_status.c @@ -312,6 +312,7 @@ ccw_device_do_sense(struct ccw_device *cdev, struct irb *irb) { struct subchannel *sch; struct ccw1 *sense_ccw; + int rc; sch = to_subchannel(cdev->dev.parent); @@ -337,7 +338,10 @@ ccw_device_do_sense(struct ccw_device *cdev, struct irb *irb) /* Reset internal retry indication. */ cdev->private->flags.intretry = 0; - return cio_start(sch, sense_ccw, 0xff); + rc = cio_start(sch, sense_ccw, 0xff); + if (rc == -ENODEV || rc == -EACCES) + dev_fsm_event(cdev, DEV_EVENT_VERIFY); + return rc; } /* -- cgit v1.2.3-59-g8ed1b From 8284fb19efa1f11ea8dd213e9e227fc1fcb20586 Mon Sep 17 00:00:00 2001 From: Michael Ernst Date: Thu, 17 Apr 2008 07:46:01 +0200 Subject: [S390] cio: fix parallel cm_enable processing. It is now possible to trigger cm_enable processing several times in parallel without causing a kernel panic. 
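In effect the lock moves out of the worker and into every caller that can race on the enable state. A rough pthreads model of that shape (cm_mutex, cm_enabled and do_secm() are invented stand-ins for css->mutex, css->cm_enabled and chsc_secm(); this is not kernel code):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t cm_mutex = PTHREAD_MUTEX_INITIALIZER;
static int cm_enabled;                  /* stands in for css->cm_enabled */

/* Stands in for chsc_secm(); runs unlocked, callers must serialize. */
static int do_secm(int enable)
{
        cm_enabled = enable;
        return 0;
}

/* Stands in for css_cm_enable_store(); every caller takes the lock. */
static void *store_cm_enable(void *arg)
{
        int enable = *(int *)arg;

        pthread_mutex_lock(&cm_mutex);
        if (cm_enabled != enable)
                do_secm(enable);
        pthread_mutex_unlock(&cm_mutex);
        return NULL;
}

int main(void)
{
        pthread_t a, b;
        int on = 1, off = 0;

        pthread_create(&a, NULL, store_cm_enable, &on);
        pthread_create(&b, NULL, store_cm_enable, &off);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("cm_enabled=%d\n", cm_enabled);
        return 0;
}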
Signed-off-by: Michael Ernst Signed-off-by: Martin Schwidefsky Signed-off-by: Heiko Carstens --- drivers/s390/cio/chsc.c | 3 --- drivers/s390/cio/css.c | 10 +++++++++- 2 files changed, 9 insertions(+), 4 deletions(-) (limited to 'drivers/s390') diff --git a/drivers/s390/cio/chsc.c b/drivers/s390/cio/chsc.c index b6a40c20780d..5de86908b0d0 100644 --- a/drivers/s390/cio/chsc.c +++ b/drivers/s390/cio/chsc.c @@ -766,7 +766,6 @@ chsc_secm(struct channel_subsystem *css, int enable) if (!secm_area) return -ENOMEM; - mutex_lock(&css->mutex); if (enable && !css->cm_enabled) { css->cub_addr1 = (void *)get_zeroed_page(GFP_KERNEL | GFP_DMA); css->cub_addr2 = (void *)get_zeroed_page(GFP_KERNEL | GFP_DMA); @@ -774,7 +773,6 @@ chsc_secm(struct channel_subsystem *css, int enable) free_page((unsigned long)css->cub_addr1); free_page((unsigned long)css->cub_addr2); free_page((unsigned long)secm_area); - mutex_unlock(&css->mutex); return -ENOMEM; } } @@ -795,7 +793,6 @@ chsc_secm(struct channel_subsystem *css, int enable) free_page((unsigned long)css->cub_addr1); free_page((unsigned long)css->cub_addr2); } - mutex_unlock(&css->mutex); free_page((unsigned long)secm_area); return ret; } diff --git a/drivers/s390/cio/css.c b/drivers/s390/cio/css.c index 3e829c827493..c1afab5f72d6 100644 --- a/drivers/s390/cio/css.c +++ b/drivers/s390/cio/css.c @@ -689,10 +689,14 @@ css_cm_enable_show(struct device *dev, struct device_attribute *attr, char *buf) { struct channel_subsystem *css = to_css(dev); + int ret; if (!css) return 0; - return sprintf(buf, "%x\n", css->cm_enabled); + mutex_lock(&css->mutex); + ret = sprintf(buf, "%x\n", css->cm_enabled); + mutex_unlock(&css->mutex); + return ret; } static ssize_t @@ -702,6 +706,7 @@ css_cm_enable_store(struct device *dev, struct device_attribute *attr, struct channel_subsystem *css = to_css(dev); int ret; + mutex_lock(&css->mutex); switch (buf[0]) { case '0': ret = css->cm_enabled ? chsc_secm(css, 0) : 0; @@ -712,6 +717,7 @@ css_cm_enable_store(struct device *dev, struct device_attribute *attr, default: ret = -EINVAL; } + mutex_unlock(&css->mutex); return ret < 0 ? ret : count; } @@ -758,9 +764,11 @@ static int css_reboot_event(struct notifier_block *this, struct channel_subsystem *css; css = channel_subsystems[i]; + mutex_lock(&css->mutex); if (css->cm_enabled) if (chsc_secm(css, 0)) ret = NOTIFY_BAD; + mutex_unlock(&css->mutex); } return ret; -- cgit v1.2.3-59-g8ed1b From d1e23375bf5d1079cd54a1c6bc8592c42061f1e1 Mon Sep 17 00:00:00 2001 From: Heiko Carstens Date: Thu, 17 Apr 2008 07:46:02 +0200 Subject: [S390] sclp: Get rid of in_atomic() use. Reintroduces in_interrupt() check in sclp_tty code. Add may_schedule parameter to vt220 write function, so we can let the write function know if it may schedule or not. So we disallow scheduling for all console calls and may allow them for tty calls. 
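The decision the patch makes explicit can be sketched in ordinary C: callers tell the write path whether sleeping is allowed, and the write path additionally refuses to sleep in interrupt context. Everything below is an illustrative userspace stand-in, with a plain flag in place of in_interrupt():

#include <stdbool.h>
#include <stdio.h>

static bool in_interrupt_context;       /* stands in for in_interrupt() */

static void busy_wait(void)  { puts("polling for a free buffer"); }
static void sleep_wait(void) { puts("sleeping until a buffer is freed"); }

static void vt220_write(const char *buf, bool may_schedule)
{
        /* Console callers pass may_schedule = false and never sleep;
         * tty callers may sleep unless running in interrupt context. */
        if (in_interrupt_context || !may_schedule)
                busy_wait();
        else
                sleep_wait();
        printf("wrote: %s\n", buf);
}

int main(void)
{
        vt220_write("console message", false); /* console path */
        vt220_write("tty data", true);         /* tty path */
        return 0;
}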
Signed-off-by: Martin Schwidefsky Signed-off-by: Heiko Carstens --- drivers/s390/char/sclp_tty.c | 2 +- drivers/s390/char/sclp_vt220.c | 13 ++++++------- 2 files changed, 7 insertions(+), 8 deletions(-) (limited to 'drivers/s390') diff --git a/drivers/s390/char/sclp_tty.c b/drivers/s390/char/sclp_tty.c index 2e616e33891d..e3b3d390b4a3 100644 --- a/drivers/s390/char/sclp_tty.c +++ b/drivers/s390/char/sclp_tty.c @@ -332,7 +332,7 @@ sclp_tty_write_string(const unsigned char *str, int count) if (sclp_ttybuf == NULL) { while (list_empty(&sclp_tty_pages)) { spin_unlock_irqrestore(&sclp_tty_lock, flags); - if (in_atomic()) + if (in_interrupt()) sclp_sync_wait(); else wait_event(sclp_tty_waitq, diff --git a/drivers/s390/char/sclp_vt220.c b/drivers/s390/char/sclp_vt220.c index f7b258dfd52c..ed507594e62b 100644 --- a/drivers/s390/char/sclp_vt220.c +++ b/drivers/s390/char/sclp_vt220.c @@ -383,7 +383,7 @@ sclp_vt220_timeout(unsigned long data) */ static int __sclp_vt220_write(const unsigned char *buf, int count, int do_schedule, - int convertlf) + int convertlf, int may_schedule) { unsigned long flags; void *page; @@ -398,9 +398,8 @@ __sclp_vt220_write(const unsigned char *buf, int count, int do_schedule, /* Create a sclp output buffer if none exists yet */ if (sclp_vt220_current_request == NULL) { while (list_empty(&sclp_vt220_empty)) { - spin_unlock_irqrestore(&sclp_vt220_lock, - flags); - if (in_atomic()) + spin_unlock_irqrestore(&sclp_vt220_lock, flags); + if (in_interrupt() || !may_schedule) sclp_sync_wait(); else wait_event(sclp_vt220_waitq, @@ -450,7 +449,7 @@ __sclp_vt220_write(const unsigned char *buf, int count, int do_schedule, static int sclp_vt220_write(struct tty_struct *tty, const unsigned char *buf, int count) { - return __sclp_vt220_write(buf, count, 1, 0); + return __sclp_vt220_write(buf, count, 1, 0, 1); } #define SCLP_VT220_SESSION_ENDED 0x01 @@ -529,7 +528,7 @@ sclp_vt220_close(struct tty_struct *tty, struct file *filp) static void sclp_vt220_put_char(struct tty_struct *tty, unsigned char ch) { - __sclp_vt220_write(&ch, 1, 0, 0); + __sclp_vt220_write(&ch, 1, 0, 0, 1); } /* @@ -746,7 +745,7 @@ __initcall(sclp_vt220_tty_init); static void sclp_vt220_con_write(struct console *con, const char *buf, unsigned int count) { - __sclp_vt220_write((const unsigned char *) buf, count, 1, 1); + __sclp_vt220_write((const unsigned char *) buf, count, 1, 1, 0); } static struct tty_driver * -- cgit v1.2.3-59-g8ed1b From 35b58b028dfc99dd390a09f66945947c4945fa64 Mon Sep 17 00:00:00 2001 From: Ursula Braun Date: Thu, 17 Apr 2008 07:46:03 +0200 Subject: [S390] qdio: Unrecognized inbound traffic if many FCP devices are online Problem: Usually every FCP device has its own indicator field the adapter uses to signal outstanding work. Once a certain limit of devices is reached, a common indicator field is used. In certain scenarios qdio resets this common indicator field, but handles only part of the FCP-devices sharing the common indicator field. Thus inbound traffic on the non-processed shared FCP-devices is not recognized immediately. Solution: Make sure common indicator field is reset only, if all FCP-devices sharing the indicator are processed. 
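The fix boils down to testing which indicator a queue actually uses instead of trusting a flag sampled earlier. A minimal stand-alone model (queue, spare_indicator and finish_inbound() are invented names that only mirror the roles of the real qdio structures):

#include <stdio.h>

static unsigned int spare_indicator;    /* shared by many FCP devices */

struct queue {
        unsigned int *dev_st_chg_ind;   /* private indicator or &spare_indicator */
};

static void finish_inbound(struct queue *q)
{
        /* Clear the bit only for a non-shared indicator; a queue using the
         * shared one must leave it set for the other devices sharing it. */
        if (q->dev_st_chg_ind != &spare_indicator)
                *q->dev_st_chg_ind = 0;
}

int main(void)
{
        unsigned int private_ind = 1;
        struct queue q_priv = { &private_ind };
        struct queue q_shared = { &spare_indicator };

        spare_indicator = 1;
        finish_inbound(&q_priv);
        finish_inbound(&q_shared);
        printf("private=%u shared=%u\n", private_ind, spare_indicator);
        return 0;
}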
Signed-off-by: Ursula Braun Signed-off-by: Martin Schwidefsky Signed-off-by: Heiko Carstens --- drivers/s390/cio/qdio.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) (limited to 'drivers/s390') diff --git a/drivers/s390/cio/qdio.c b/drivers/s390/cio/qdio.c index 2b5bfb7c69e5..b1d18aa471c9 100644 --- a/drivers/s390/cio/qdio.c +++ b/drivers/s390/cio/qdio.c @@ -1399,7 +1399,7 @@ __tiqdio_inbound_processing(struct qdio_q *q, int spare_ind_was_set) * q->dev_st_chg_ind is the indicator, be it shared or not. * only clear it, if indicator is non-shared */ - if (!spare_ind_was_set) + if (q->dev_st_chg_ind != &spare_indicator) tiqdio_clear_summary_bit((__u32*)q->dev_st_chg_ind); if (q->hydra_gives_outbound_pcis) { -- cgit v1.2.3-59-g8ed1b From 00966c0a5b00bc0afdc0bd0446adec271f8b098b Mon Sep 17 00:00:00 2001 From: Stefan Haberland Date: Thu, 17 Apr 2008 07:46:04 +0200 Subject: [S390] dasd: use GFP_DMA for fba private data allocation allocating dasd_fba_private without GFP_DMA results in IO error during read device characteristics of a FBA disk Signed-off-by: Stefan Haberland Signed-off-by: Martin Schwidefsky Signed-off-by: Heiko Carstens --- drivers/s390/block/dasd_fba.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) (limited to 'drivers/s390') diff --git a/drivers/s390/block/dasd_fba.c b/drivers/s390/block/dasd_fba.c index d13ea05089a7..116611583df8 100644 --- a/drivers/s390/block/dasd_fba.c +++ b/drivers/s390/block/dasd_fba.c @@ -125,7 +125,8 @@ dasd_fba_check_characteristics(struct dasd_device *device) private = (struct dasd_fba_private *) device->private; if (private == NULL) { - private = kzalloc(sizeof(struct dasd_fba_private), GFP_KERNEL); + private = kzalloc(sizeof(struct dasd_fba_private), + GFP_KERNEL | GFP_DMA); if (private == NULL) { DEV_MESSAGE(KERN_WARNING, device, "%s", "memory allocation failed for private " -- cgit v1.2.3-59-g8ed1b From 92bf435f383a6193d59c687ce87ccca3529c68a1 Mon Sep 17 00:00:00 2001 From: Michael Holzheu Date: Thu, 17 Apr 2008 07:46:05 +0200 Subject: [S390] tape: duplicate sysfs filename when setting tape device online When a tape device is set online, offline and online again, the following error message is printed on the console: "sysfs: duplicate filename 'non-rewinding' can not be created". The reason is that when setting a device online, the tape driver creates a sysfs symlink from the tape device to the tape class device. Unfortunately the symlink is not removed correctly, when the device is set offline. Instead of passing the tape device object to sysfs_remove_link, the class device object is used. This patch fixes this problem and uses the correct tape device object now. 
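The failure mode is easiest to see as an asymmetric create/remove pair: the link is created under the tape device object, so removing it from the class device object silently does nothing and the next online attempt collides with the stale entry. A toy model with invented helpers (create_link()/remove_link() are not sysfs calls):

#include <stdio.h>
#include <string.h>

static char link_owner[32];             /* object the symlink lives under */

static int create_link(const char *owner)
{
        if (link_owner[0])
                return -1;              /* "duplicate filename" on re-add */
        strcpy(link_owner, owner);
        return 0;
}

static void remove_link(const char *owner)
{
        if (!strcmp(link_owner, owner)) /* wrong owner leaves the link behind */
                link_owner[0] = '\0';
}

int main(void)
{
        create_link("tape-device");
        remove_link("tape-class");      /* old, buggy pairing: a no-op */
        printf("set online again: %d\n", create_link("tape-device"));
        remove_link("tape-device");     /* fixed pairing */
        printf("set online again: %d\n", create_link("tape-device"));
        return 0;
}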
Signed-off-by: Michael Holzheu Signed-off-by: Martin Schwidefsky Signed-off-by: Heiko Carstens --- drivers/s390/char/tape_char.c | 4 ++-- drivers/s390/char/tape_class.c | 5 ++--- drivers/s390/char/tape_class.h | 2 +- 3 files changed, 5 insertions(+), 6 deletions(-) (limited to 'drivers/s390') diff --git a/drivers/s390/char/tape_char.c b/drivers/s390/char/tape_char.c index b830a8cbef78..ebe84067bae9 100644 --- a/drivers/s390/char/tape_char.c +++ b/drivers/s390/char/tape_char.c @@ -83,9 +83,9 @@ tapechar_setup_device(struct tape_device * device) void tapechar_cleanup_device(struct tape_device *device) { - unregister_tape_dev(device->rt); + unregister_tape_dev(&device->cdev->dev, device->rt); device->rt = NULL; - unregister_tape_dev(device->nt); + unregister_tape_dev(&device->cdev->dev, device->nt); device->nt = NULL; } diff --git a/drivers/s390/char/tape_class.c b/drivers/s390/char/tape_class.c index aa7f166f4034..6dfdb7c17981 100644 --- a/drivers/s390/char/tape_class.c +++ b/drivers/s390/char/tape_class.c @@ -99,11 +99,10 @@ fail_with_tcd: } EXPORT_SYMBOL(register_tape_dev); -void unregister_tape_dev(struct tape_class_device *tcd) +void unregister_tape_dev(struct device *device, struct tape_class_device *tcd) { if (tcd != NULL && !IS_ERR(tcd)) { - sysfs_remove_link(&tcd->class_device->kobj, - tcd->mode_name); + sysfs_remove_link(&device->kobj, tcd->mode_name); device_destroy(tape_class, tcd->char_device->dev); cdev_del(tcd->char_device); kfree(tcd); diff --git a/drivers/s390/char/tape_class.h b/drivers/s390/char/tape_class.h index e2b5ac918acf..707b7f48c232 100644 --- a/drivers/s390/char/tape_class.h +++ b/drivers/s390/char/tape_class.h @@ -56,6 +56,6 @@ struct tape_class_device *register_tape_dev( char * device_name, char * node_name ); -void unregister_tape_dev(struct tape_class_device *tcd); +void unregister_tape_dev(struct device *device, struct tape_class_device *tcd); #endif /* __TAPE_CLASS_H__ */ -- cgit v1.2.3-59-g8ed1b From a695f16729e00995fe72baf0e8bee4bf9c232ae0 Mon Sep 17 00:00:00 2001 From: Frank Munzert Date: Thu, 17 Apr 2008 07:46:06 +0200 Subject: [S390] vmur: Use wait queue instead of mutex to serialize open If user space opens a unit record device node then vmur is leaving the kernel with lock open_mutex still held to prevent other processes from opening the device simultaneously. This causes lockdep to complain about a lock held when returning to user space. Now the mutex is replaced by a wait queue to serialize device open. 
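The important property of the new scheme is that nothing stays locked across the return to user space; an open flag plus wait queue replace the mutex that used to be held between open and release. A pthreads sketch of the same logic, with a condition variable standing in for wait_event_interruptible()/wake_up_interruptible() (open_flag and open_lock mirror the names in the patch below, the rest is illustrative only):

#include <errno.h>
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t open_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  open_wait = PTHREAD_COND_INITIALIZER;
static int open_flag;                   /* "device is open" */

static int ur_open(bool nonblock)
{
        pthread_mutex_lock(&open_lock);
        while (open_flag) {
                if (nonblock) {
                        pthread_mutex_unlock(&open_lock);
                        return -EBUSY;
                }
                /* ~ wait_event_interruptible(urd->wait, open_flag == 0) */
                pthread_cond_wait(&open_wait, &open_lock);
        }
        open_flag++;
        pthread_mutex_unlock(&open_lock);       /* nothing held afterwards */
        return 0;
}

static void ur_release(void)
{
        pthread_mutex_lock(&open_lock);
        open_flag--;
        pthread_mutex_unlock(&open_lock);
        pthread_cond_broadcast(&open_wait);     /* ~ wake_up_interruptible() */
}

int main(void)
{
        printf("first open: %d\n", ur_open(true));
        printf("second open (O_NONBLOCK): %d\n", ur_open(true));
        ur_release();
        printf("open after release: %d\n", ur_open(true));
        return 0;
}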
Signed-off-by: Frank Munzert Signed-off-by: Martin Schwidefsky Signed-off-by: Heiko Carstens --- drivers/s390/char/vmur.c | 24 +++++++++++++++++------- drivers/s390/char/vmur.h | 4 +++- 2 files changed, 20 insertions(+), 8 deletions(-) (limited to 'drivers/s390') diff --git a/drivers/s390/char/vmur.c b/drivers/s390/char/vmur.c index 7689b500a104..83ae9a852f00 100644 --- a/drivers/s390/char/vmur.c +++ b/drivers/s390/char/vmur.c @@ -100,7 +100,8 @@ static struct urdev *urdev_alloc(struct ccw_device *cdev) urd->reclen = cdev->id.driver_info; ccw_device_get_id(cdev, &urd->dev_id); mutex_init(&urd->io_mutex); - mutex_init(&urd->open_mutex); + init_waitqueue_head(&urd->wait); + spin_lock_init(&urd->open_lock); atomic_set(&urd->ref_count, 1); urd->cdev = cdev; get_device(&cdev->dev); @@ -678,17 +679,21 @@ static int ur_open(struct inode *inode, struct file *file) if (!urd) return -ENXIO; - if (file->f_flags & O_NONBLOCK) { - if (!mutex_trylock(&urd->open_mutex)) { + spin_lock(&urd->open_lock); + while (urd->open_flag) { + spin_unlock(&urd->open_lock); + if (file->f_flags & O_NONBLOCK) { rc = -EBUSY; goto fail_put; } - } else { - if (mutex_lock_interruptible(&urd->open_mutex)) { + if (wait_event_interruptible(urd->wait, urd->open_flag == 0)) { rc = -ERESTARTSYS; goto fail_put; } + spin_lock(&urd->open_lock); } + urd->open_flag++; + spin_unlock(&urd->open_lock); TRACE("ur_open\n"); @@ -720,7 +725,9 @@ static int ur_open(struct inode *inode, struct file *file) fail_urfile_free: urfile_free(urf); fail_unlock: - mutex_unlock(&urd->open_mutex); + spin_lock(&urd->open_lock); + urd->open_flag--; + spin_unlock(&urd->open_lock); fail_put: urdev_put(urd); return rc; @@ -731,7 +738,10 @@ static int ur_release(struct inode *inode, struct file *file) struct urfile *urf = file->private_data; TRACE("ur_release\n"); - mutex_unlock(&urf->urd->open_mutex); + spin_lock(&urf->urd->open_lock); + urf->urd->open_flag--; + spin_unlock(&urf->urd->open_lock); + wake_up_interruptible(&urf->urd->wait); urdev_put(urf->urd); urfile_free(urf); return 0; diff --git a/drivers/s390/char/vmur.h b/drivers/s390/char/vmur.h index fa959644735a..fa320ad4593d 100644 --- a/drivers/s390/char/vmur.h +++ b/drivers/s390/char/vmur.h @@ -62,7 +62,6 @@ struct file_control_block { struct urdev { struct ccw_device *cdev; /* Backpointer to ccw device */ struct mutex io_mutex; /* Serialises device IO */ - struct mutex open_mutex; /* Serialises access to device */ struct completion *io_done; /* do_ur_io waits; irq completes */ struct device *device; struct cdev *char_device; @@ -71,6 +70,9 @@ struct urdev { int class; /* VM device class */ int io_request_rc; /* return code from I/O request */ atomic_t ref_count; /* reference counter */ + wait_queue_head_t wait; /* wait queue to serialize open */ + int open_flag; /* "urdev is open" flag */ + spinlock_t open_lock; /* serialize critical sections */ }; /* -- cgit v1.2.3-59-g8ed1b From f60c768c387026499bbdefdd807d9124ae2b3a8c Mon Sep 17 00:00:00 2001 From: Stefan Haberland Date: Thu, 17 Apr 2008 07:46:08 +0200 Subject: [S390] dasd: add sim handling. Now the system reports system information messages (SIM) to the user. The System Reference Code (SRC) which is reported to the user gives the abbility to lookup the reason of the SIM online in the documentation of the storage server. 
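The SIM check itself is a few byte tests on the sense data; the SRC is assembled from sense bytes 22, 23, 11 and 12 as in the patch. The userspace sketch below keeps only that extraction and drops the choice between operator-message and log-only severity:

#include <stdio.h>

#define DASD_SIM_SENSE          0x0F    /* constants taken from the patch */
#define DASD_SIM_MSG_TO_OP      0x03
#define DASD_SIM_LOG            0x0C

static void handle_sim(const unsigned char *sense)
{
        if ((sense[6] & DASD_SIM_SENSE) != DASD_SIM_SENSE)
                return;                 /* not a service information message */
        if ((sense[24] & DASD_SIM_MSG_TO_OP) || (sense[24] & DASD_SIM_LOG))
                printf("SIM - SRC: %02x%02x%02x%02x\n",
                       sense[22], sense[23], sense[11], sense[12]);
}

int main(void)
{
        unsigned char sense[32] = { 0 };

        sense[6]  = DASD_SIM_SENSE;
        sense[24] = DASD_SIM_LOG;
        sense[22] = 0x12; sense[23] = 0x34; sense[11] = 0x56; sense[12] = 0x78;
        handle_sim(sense);              /* prints "SIM - SRC: 12345678" */
        return 0;
}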
Signed-off-by: Stefan Haberland Signed-off-by: Martin Schwidefsky Signed-off-by: Heiko Carstens --- drivers/s390/block/dasd_3990_erp.c | 34 ++++++++++++++++++++++++++++++++++ drivers/s390/block/dasd_eckd.c | 7 +++++++ drivers/s390/block/dasd_int.h | 6 ++++++ 3 files changed, 47 insertions(+) (limited to 'drivers/s390') diff --git a/drivers/s390/block/dasd_3990_erp.c b/drivers/s390/block/dasd_3990_erp.c index b19db20a0bef..e6700df52df4 100644 --- a/drivers/s390/block/dasd_3990_erp.c +++ b/drivers/s390/block/dasd_3990_erp.c @@ -1995,6 +1995,36 @@ dasd_3990_erp_compound(struct dasd_ccw_req * erp, char *sense) } /* end dasd_3990_erp_compound */ +/* + *DASD_3990_ERP_HANDLE_SIM + * + *DESCRIPTION + * inspects the SIM SENSE data and starts an appropriate action + * + * PARAMETER + * sense sense data of the actual error + * + * RETURN VALUES + * none + */ +void +dasd_3990_erp_handle_sim(struct dasd_device *device, char *sense) +{ + /* print message according to log or message to operator mode */ + if ((sense[24] & DASD_SIM_MSG_TO_OP) || (sense[1] & 0x10)) { + + /* print SIM SRC from RefCode */ + DEV_MESSAGE(KERN_ERR, device, "SIM - SRC: " + "%02x%02x%02x%02x", sense[22], + sense[23], sense[11], sense[12]); + } else if (sense[24] & DASD_SIM_LOG) { + /* print SIM SRC Refcode */ + DEV_MESSAGE(KERN_WARNING, device, "SIM - SRC: " + "%02x%02x%02x%02x", sense[22], + sense[23], sense[11], sense[12]); + } +} + /* * DASD_3990_ERP_INSPECT_32 * @@ -2018,6 +2048,10 @@ dasd_3990_erp_inspect_32(struct dasd_ccw_req * erp, char *sense) erp->function = dasd_3990_erp_inspect_32; + /* check for SIM sense data */ + if ((sense[6] & DASD_SIM_SENSE) == DASD_SIM_SENSE) + dasd_3990_erp_handle_sim(device, sense); + if (sense[25] & DASD_SENSE_BIT_0) { /* compound program action codes (byte25 bit 0 == '1') */ diff --git a/drivers/s390/block/dasd_eckd.c b/drivers/s390/block/dasd_eckd.c index 61f16937c1e0..a0edae091b5e 100644 --- a/drivers/s390/block/dasd_eckd.c +++ b/drivers/s390/block/dasd_eckd.c @@ -1415,6 +1415,13 @@ static void dasd_eckd_handle_unsolicited_interrupt(struct dasd_device *device, return; } + + /* service information message SIM */ + if ((irb->ecw[6] & DASD_SIM_SENSE) == DASD_SIM_SENSE) { + dasd_3990_erp_handle_sim(device, irb->ecw); + return; + } + /* just report other unsolicited interrupts */ DEV_MESSAGE(KERN_DEBUG, device, "%s", "unsolicited interrupt received"); diff --git a/drivers/s390/block/dasd_int.h b/drivers/s390/block/dasd_int.h index 44b2984dfbee..6c624bf44617 100644 --- a/drivers/s390/block/dasd_int.h +++ b/drivers/s390/block/dasd_int.h @@ -72,6 +72,11 @@ struct dasd_block; #define DASD_SENSE_BIT_2 0x20 #define DASD_SENSE_BIT_3 0x10 +/* BIT DEFINITIONS FOR SIM SENSE */ +#define DASD_SIM_SENSE 0x0F +#define DASD_SIM_MSG_TO_OP 0x03 +#define DASD_SIM_LOG 0x0C + /* * SECTION: MACROs for klogd and s390 debug feature (dbf) */ @@ -621,6 +626,7 @@ void dasd_log_sense(struct dasd_ccw_req *, struct irb *); /* externals in dasd_3990_erp.c */ struct dasd_ccw_req *dasd_3990_erp_action(struct dasd_ccw_req *); +void dasd_3990_erp_handle_sim(struct dasd_device *, char *); /* externals in dasd_eer.c */ #ifdef CONFIG_DASD_EER -- cgit v1.2.3-59-g8ed1b From aa24f7f08baca5aa9201901131cbdd0b14deceb6 Mon Sep 17 00:00:00 2001 From: Christian Borntraeger Date: Thu, 17 Apr 2008 07:46:09 +0200 Subject: [S390] KVM preparation: split sysinfo definitions for kvm use drivers/s390/sysinfo.c uses the store system information intruction to query the system about information of the machine, the LPAR and additional hypervisors. 
KVM has to implement the host part for this instruction. To avoid code duplication, this patch splits the common definitions from sysinfo.c into a separate header file include/asm-s390/sysinfo.h for KVM use. Signed-off-by: Christian Borntraeger Signed-off-by: Carsten Otte Signed-off-by: Martin Schwidefsky Signed-off-by: Heiko Carstens --- drivers/s390/sysinfo.c | 100 +--------------------------------------- include/asm-s390/sysinfo.h | 111 +++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 112 insertions(+), 99 deletions(-) create mode 100644 include/asm-s390/sysinfo.h (limited to 'drivers/s390') diff --git a/drivers/s390/sysinfo.c b/drivers/s390/sysinfo.c index 291ff6235fe2..43fffa3b099d 100644 --- a/drivers/s390/sysinfo.c +++ b/drivers/s390/sysinfo.c @@ -11,111 +11,13 @@ #include #include #include +#include /* Sigh, math-emu. Don't ask. */ #include #include #include -struct sysinfo_1_1_1 { - char reserved_0[32]; - char manufacturer[16]; - char type[4]; - char reserved_1[12]; - char model_capacity[16]; - char sequence[16]; - char plant[4]; - char model[16]; -}; - -struct sysinfo_1_2_1 { - char reserved_0[80]; - char sequence[16]; - char plant[4]; - char reserved_1[2]; - unsigned short cpu_address; -}; - -struct sysinfo_1_2_2 { - char format; - char reserved_0[1]; - unsigned short acc_offset; - char reserved_1[24]; - unsigned int secondary_capability; - unsigned int capability; - unsigned short cpus_total; - unsigned short cpus_configured; - unsigned short cpus_standby; - unsigned short cpus_reserved; - unsigned short adjustment[0]; -}; - -struct sysinfo_1_2_2_extension { - unsigned int alt_capability; - unsigned short alt_adjustment[0]; -}; - -struct sysinfo_2_2_1 { - char reserved_0[80]; - char sequence[16]; - char plant[4]; - unsigned short cpu_id; - unsigned short cpu_address; -}; - -struct sysinfo_2_2_2 { - char reserved_0[32]; - unsigned short lpar_number; - char reserved_1; - unsigned char characteristics; - unsigned short cpus_total; - unsigned short cpus_configured; - unsigned short cpus_standby; - unsigned short cpus_reserved; - char name[8]; - unsigned int caf; - char reserved_2[16]; - unsigned short cpus_dedicated; - unsigned short cpus_shared; -}; - -#define LPAR_CHAR_DEDICATED (1 << 7) -#define LPAR_CHAR_SHARED (1 << 6) -#define LPAR_CHAR_LIMITED (1 << 5) - -struct sysinfo_3_2_2 { - char reserved_0[31]; - unsigned char count; - struct { - char reserved_0[4]; - unsigned short cpus_total; - unsigned short cpus_configured; - unsigned short cpus_standby; - unsigned short cpus_reserved; - char name[8]; - unsigned int caf; - char cpi[16]; - char reserved_1[24]; - - } vm[8]; -}; - -static inline int stsi(void *sysinfo, int fc, int sel1, int sel2) -{ - register int r0 asm("0") = (fc << 28) | sel1; - register int r1 asm("1") = sel2; - - asm volatile( - " stsi 0(%2)\n" - "0: jz 2f\n" - "1: lhi %0,%3\n" - "2:\n" - EX_TABLE(0b,1b) - : "+d" (r0) : "d" (r1), "a" (sysinfo), "K" (-ENOSYS) - : "cc", "memory" ); - return r0; -} - static inline int stsi_0(void) { int rc = stsi (NULL, 0, 0, 0); diff --git a/include/asm-s390/sysinfo.h b/include/asm-s390/sysinfo.h new file mode 100644 index 000000000000..014f2a24664e --- /dev/null +++ b/include/asm-s390/sysinfo.h @@ -0,0 +1,111 @@ +/* + * definition for store system information stsi + * + * Copyright IBM Corp. 2001,2008 + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License (version 2 only) + * as published by the Free Software Foundation. 
+ * + * Author(s): Ulrich Weigand + * Christian Borntraeger + */ + +struct sysinfo_1_1_1 { + char reserved_0[32]; + char manufacturer[16]; + char type[4]; + char reserved_1[12]; + char model_capacity[16]; + char sequence[16]; + char plant[4]; + char model[16]; +}; + +struct sysinfo_1_2_1 { + char reserved_0[80]; + char sequence[16]; + char plant[4]; + char reserved_1[2]; + unsigned short cpu_address; +}; + +struct sysinfo_1_2_2 { + char format; + char reserved_0[1]; + unsigned short acc_offset; + char reserved_1[24]; + unsigned int secondary_capability; + unsigned int capability; + unsigned short cpus_total; + unsigned short cpus_configured; + unsigned short cpus_standby; + unsigned short cpus_reserved; + unsigned short adjustment[0]; +}; + +struct sysinfo_1_2_2_extension { + unsigned int alt_capability; + unsigned short alt_adjustment[0]; +}; + +struct sysinfo_2_2_1 { + char reserved_0[80]; + char sequence[16]; + char plant[4]; + unsigned short cpu_id; + unsigned short cpu_address; +}; + +struct sysinfo_2_2_2 { + char reserved_0[32]; + unsigned short lpar_number; + char reserved_1; + unsigned char characteristics; + unsigned short cpus_total; + unsigned short cpus_configured; + unsigned short cpus_standby; + unsigned short cpus_reserved; + char name[8]; + unsigned int caf; + char reserved_2[16]; + unsigned short cpus_dedicated; + unsigned short cpus_shared; +}; + +#define LPAR_CHAR_DEDICATED (1 << 7) +#define LPAR_CHAR_SHARED (1 << 6) +#define LPAR_CHAR_LIMITED (1 << 5) + +struct sysinfo_3_2_2 { + char reserved_0[31]; + unsigned char count; + struct { + char reserved_0[4]; + unsigned short cpus_total; + unsigned short cpus_configured; + unsigned short cpus_standby; + unsigned short cpus_reserved; + char name[8]; + unsigned int caf; + char cpi[16]; + char reserved_1[24]; + + } vm[8]; +}; + +static inline int stsi(void *sysinfo, int fc, int sel1, int sel2) +{ + register int r0 asm("0") = (fc << 28) | sel1; + register int r1 asm("1") = sel2; + + asm volatile( + " stsi 0(%2)\n" + "0: jz 2f\n" + "1: lhi %0,%3\n" + "2:\n" + EX_TABLE(0b, 1b) + : "+d" (r0) : "d" (r1), "a" (sysinfo), "K" (-ENOSYS) + : "cc", "memory"); + return r0; +} -- cgit v1.2.3-59-g8ed1b From cbce70e687bf9c7968d63f058b4c3d2e90008ce2 Mon Sep 17 00:00:00 2001 From: Martin Schwidefsky Date: Thu, 17 Apr 2008 07:46:10 +0200 Subject: [S390] Add new fields for System z10 to /proc/sysinfo Add permanent and temporary model capacity and the corresponding capacity value fields for the three capacity identifiers to the output of /proc/sysinfo. 
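The resulting /proc/sysinfo lines can be modeled with an ordinary struct and printf; the permanent and temporary capacity lines appear only when the corresponding name is non-empty. In the real code the rating fields are 4-byte values read via *(u32 *); the sketch below simply uses unsigned int and invented field names:

#include <stdio.h>

struct model_info {
        char model_capacity[17];
        char model_perm_cap[17];        /* empty string if not reported */
        char model_temp_cap[17];
        unsigned int cap_rating;
        unsigned int perm_cap_rating;
        unsigned int temp_cap_rating;
};

static void print_capacity(const struct model_info *m)
{
        printf("Model Capacity:       %-16.16s %08u\n",
               m->model_capacity, m->cap_rating);
        if (m->model_perm_cap[0] != '\0')
                printf("Model Perm. Capacity: %-16.16s %08u\n",
                       m->model_perm_cap, m->perm_cap_rating);
        if (m->model_temp_cap[0] != '\0')
                printf("Model Temp. Capacity: %-16.16s %08u\n",
                       m->model_temp_cap, m->temp_cap_rating);
}

int main(void)
{
        struct model_info m = { "700", "700", "", 450, 450, 0 };

        print_capacity(&m);             /* temporary capacity line suppressed */
        return 0;
}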
Signed-off-by: Martin Schwidefsky Signed-off-by: Heiko Carstens --- drivers/s390/sysinfo.c | 16 ++++++++++++++-- include/asm-s390/sysinfo.h | 5 +++++ 2 files changed, 19 insertions(+), 2 deletions(-) (limited to 'drivers/s390') diff --git a/drivers/s390/sysinfo.c b/drivers/s390/sysinfo.c index 43fffa3b099d..c3e4ab07b9cc 100644 --- a/drivers/s390/sysinfo.c +++ b/drivers/s390/sysinfo.c @@ -35,6 +35,8 @@ static int stsi_1_1_1(struct sysinfo_1_1_1 *info, char *page, int len) EBCASC(info->sequence, sizeof(info->sequence)); EBCASC(info->plant, sizeof(info->plant)); EBCASC(info->model_capacity, sizeof(info->model_capacity)); + EBCASC(info->model_perm_cap, sizeof(info->model_perm_cap)); + EBCASC(info->model_temp_cap, sizeof(info->model_temp_cap)); len += sprintf(page + len, "Manufacturer: %-16.16s\n", info->manufacturer); len += sprintf(page + len, "Type: %-4.4s\n", @@ -57,8 +59,18 @@ static int stsi_1_1_1(struct sysinfo_1_1_1 *info, char *page, int len) info->sequence); len += sprintf(page + len, "Plant: %-4.4s\n", info->plant); - len += sprintf(page + len, "Model Capacity: %-16.16s\n", - info->model_capacity); + len += sprintf(page + len, "Model Capacity: %-16.16s %08u\n", + info->model_capacity, *(u32 *) info->model_cap_rating); + if (info->model_perm_cap[0] != '\0') + len += sprintf(page + len, + "Model Perm. Capacity: %-16.16s %08u\n", + info->model_perm_cap, + *(u32 *) info->model_perm_cap_rating); + if (info->model_temp_cap[0] != '\0') + len += sprintf(page + len, + "Model Temp. Capacity: %-16.16s %08u\n", + info->model_temp_cap, + *(u32 *) info->model_temp_cap_rating); return len; } diff --git a/include/asm-s390/sysinfo.h b/include/asm-s390/sysinfo.h index 014f2a24664e..abe10ae15e46 100644 --- a/include/asm-s390/sysinfo.h +++ b/include/asm-s390/sysinfo.h @@ -20,6 +20,11 @@ struct sysinfo_1_1_1 { char sequence[16]; char plant[4]; char model[16]; + char model_perm_cap[16]; + char model_temp_cap[16]; + char model_cap_rating[4]; + char model_perm_cap_rating[4]; + char model_temp_cap_rating[4]; }; struct sysinfo_1_2_1 { -- cgit v1.2.3-59-g8ed1b From 2f7c8bd6dc6540aa3275c0ad9f657401985c00e9 Mon Sep 17 00:00:00 2001 From: Ralph Wuerthner Date: Thu, 17 Apr 2008 07:46:15 +0200 Subject: [S390] zcrypt: add support for large random numbers This patch allows user space applications to access large amounts of truly random data. The random data source is the build-in hardware random number generator on the CEX2C cards. 
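The driver-side pattern is: fetch a 4096-byte chunk from the card, then hand it out one 32-bit word at a time through the hwrng data_read callback, refilling when the buffer runs dry and relying on the RNG core to serialize reads. A self-contained model with rand() standing in for the CPRB round trip to the CEX2C card:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define CHUNK_WORDS 1024                /* ~ ZCRYPT_RNG_BUFFER_SIZE / 4 */

static uint32_t buffer[CHUNK_WORDS];
static int buffer_index;                /* words still unread in the buffer */

/* Stand-in for the request sent to the crypto card. */
static int pull_chunk_from_card(uint32_t *buf, int words)
{
        int i;

        for (i = 0; i < words; i++)
                buf[i] = (uint32_t)rand();
        return words;
}

static int rng_data_read(uint32_t *data)
{
        /* No locking needed here as long as the caller serializes reads,
         * which is what the hwrng core guarantees for its data_read hook. */
        if (buffer_index == 0)
                buffer_index = pull_chunk_from_card(buffer, CHUNK_WORDS);
        *data = buffer[--buffer_index];
        return sizeof(*data);
}

int main(void)
{
        uint32_t word;

        rng_data_read(&word);
        printf("random word: %08x\n", word);
        return 0;
}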
Signed-off-by: Ralph Wuerthner Signed-off-by: Martin Schwidefsky Signed-off-by: Heiko Carstens --- drivers/crypto/Kconfig | 1 + drivers/s390/crypto/zcrypt_api.c | 120 ++++++++++++++++++++++ drivers/s390/crypto/zcrypt_api.h | 8 ++ drivers/s390/crypto/zcrypt_pcixcc.c | 199 +++++++++++++++++++++++++++++++++++- 4 files changed, 327 insertions(+), 1 deletion(-) (limited to 'drivers/s390') diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig index 6b658d84d521..6d2f0c8d419a 100644 --- a/drivers/crypto/Kconfig +++ b/drivers/crypto/Kconfig @@ -64,6 +64,7 @@ config ZCRYPT tristate "Support for PCI-attached cryptographic adapters" depends on S390 select ZCRYPT_MONOLITHIC if ZCRYPT="y" + select HW_RANDOM help Select this option if you want to use a PCI-attached cryptographic adapter like: diff --git a/drivers/s390/crypto/zcrypt_api.c b/drivers/s390/crypto/zcrypt_api.c index e3625a47a596..b2740ff2e615 100644 --- a/drivers/s390/crypto/zcrypt_api.c +++ b/drivers/s390/crypto/zcrypt_api.c @@ -36,6 +36,7 @@ #include #include #include +#include #include "zcrypt_api.h" @@ -52,6 +53,9 @@ static LIST_HEAD(zcrypt_device_list); static int zcrypt_device_count = 0; static atomic_t zcrypt_open_count = ATOMIC_INIT(0); +static int zcrypt_rng_device_add(void); +static void zcrypt_rng_device_remove(void); + /** * Device attributes common for all crypto devices. */ @@ -216,6 +220,22 @@ int zcrypt_device_register(struct zcrypt_device *zdev) __zcrypt_increase_preference(zdev); zcrypt_device_count++; spin_unlock_bh(&zcrypt_device_lock); + if (zdev->ops->rng) { + rc = zcrypt_rng_device_add(); + if (rc) + goto out_unregister; + } + return 0; + +out_unregister: + spin_lock_bh(&zcrypt_device_lock); + zcrypt_device_count--; + list_del_init(&zdev->list); + spin_unlock_bh(&zcrypt_device_lock); + sysfs_remove_group(&zdev->ap_dev->device.kobj, + &zcrypt_device_attr_group); + put_device(&zdev->ap_dev->device); + zcrypt_device_put(zdev); out: return rc; } @@ -226,6 +246,8 @@ EXPORT_SYMBOL(zcrypt_device_register); */ void zcrypt_device_unregister(struct zcrypt_device *zdev) { + if (zdev->ops->rng) + zcrypt_rng_device_remove(); spin_lock_bh(&zcrypt_device_lock); zcrypt_device_count--; list_del_init(&zdev->list); @@ -427,6 +449,37 @@ static long zcrypt_send_cprb(struct ica_xcRB *xcRB) return -ENODEV; } +static long zcrypt_rng(char *buffer) +{ + struct zcrypt_device *zdev; + int rc; + + spin_lock_bh(&zcrypt_device_lock); + list_for_each_entry(zdev, &zcrypt_device_list, list) { + if (!zdev->online || !zdev->ops->rng) + continue; + zcrypt_device_get(zdev); + get_device(&zdev->ap_dev->device); + zdev->request_count++; + __zcrypt_decrease_preference(zdev); + if (try_module_get(zdev->ap_dev->drv->driver.owner)) { + spin_unlock_bh(&zcrypt_device_lock); + rc = zdev->ops->rng(zdev, buffer); + spin_lock_bh(&zcrypt_device_lock); + module_put(zdev->ap_dev->drv->driver.owner); + } else + rc = -EAGAIN; + zdev->request_count--; + __zcrypt_increase_preference(zdev); + put_device(&zdev->ap_dev->device); + zcrypt_device_put(zdev); + spin_unlock_bh(&zcrypt_device_lock); + return rc; + } + spin_unlock_bh(&zcrypt_device_lock); + return -ENODEV; +} + static void zcrypt_status_mask(char status[AP_DEVICES]) { struct zcrypt_device *zdev; @@ -1041,6 +1094,73 @@ out: return count; } +static int zcrypt_rng_device_count; +static u32 *zcrypt_rng_buffer; +static int zcrypt_rng_buffer_index; +static DEFINE_MUTEX(zcrypt_rng_mutex); + +static int zcrypt_rng_data_read(struct hwrng *rng, u32 *data) +{ + int rc; + + /** + * We don't need locking here because the 
RNG API guarantees serialized + * read method calls. + */ + if (zcrypt_rng_buffer_index == 0) { + rc = zcrypt_rng((char *) zcrypt_rng_buffer); + if (rc < 0) + return -EIO; + zcrypt_rng_buffer_index = rc / sizeof *data; + } + *data = zcrypt_rng_buffer[--zcrypt_rng_buffer_index]; + return sizeof *data; +} + +static struct hwrng zcrypt_rng_dev = { + .name = "zcrypt", + .data_read = zcrypt_rng_data_read, +}; + +static int zcrypt_rng_device_add(void) +{ + int rc = 0; + + mutex_lock(&zcrypt_rng_mutex); + if (zcrypt_rng_device_count == 0) { + zcrypt_rng_buffer = (u32 *) get_zeroed_page(GFP_KERNEL); + if (!zcrypt_rng_buffer) { + rc = -ENOMEM; + goto out; + } + zcrypt_rng_buffer_index = 0; + rc = hwrng_register(&zcrypt_rng_dev); + if (rc) + goto out_free; + zcrypt_rng_device_count = 1; + } else + zcrypt_rng_device_count++; + mutex_unlock(&zcrypt_rng_mutex); + return 0; + +out_free: + free_page((unsigned long) zcrypt_rng_buffer); +out: + mutex_unlock(&zcrypt_rng_mutex); + return rc; +} + +static void zcrypt_rng_device_remove(void) +{ + mutex_lock(&zcrypt_rng_mutex); + zcrypt_rng_device_count--; + if (zcrypt_rng_device_count == 0) { + hwrng_unregister(&zcrypt_rng_dev); + free_page((unsigned long) zcrypt_rng_buffer); + } + mutex_unlock(&zcrypt_rng_mutex); +} + /** * The module initialization code. */ diff --git a/drivers/s390/crypto/zcrypt_api.h b/drivers/s390/crypto/zcrypt_api.h index de4877ee618f..0e948528a73a 100644 --- a/drivers/s390/crypto/zcrypt_api.h +++ b/drivers/s390/crypto/zcrypt_api.h @@ -100,6 +100,13 @@ struct ica_z90_status { #define ZCRYPT_CEX2C 5 #define ZCRYPT_CEX2A 6 +/** + * Large random numbers are pulled in 4096 byte chunks from the crypto cards + * and stored in a page. Be carefull when increasing this buffer due to size + * limitations for AP requests. 
+ */ +#define ZCRYPT_RNG_BUFFER_SIZE 4096 + struct zcrypt_device; struct zcrypt_ops { @@ -107,6 +114,7 @@ struct zcrypt_ops { long (*rsa_modexpo_crt)(struct zcrypt_device *, struct ica_rsa_modexpo_crt *); long (*send_cprb)(struct zcrypt_device *, struct ica_xcRB *); + long (*rng)(struct zcrypt_device *, char *); }; struct zcrypt_device { diff --git a/drivers/s390/crypto/zcrypt_pcixcc.c b/drivers/s390/crypto/zcrypt_pcixcc.c index 70b9ddc8cf9d..3674bfa82b65 100644 --- a/drivers/s390/crypto/zcrypt_pcixcc.c +++ b/drivers/s390/crypto/zcrypt_pcixcc.c @@ -355,6 +355,55 @@ static int XCRB_msg_to_type6CPRB_msgX(struct zcrypt_device *zdev, return 0; } +/** + * Prepare a type6 CPRB message for random number generation + * + * @ap_dev: AP device pointer + * @ap_msg: pointer to AP message + */ +static void rng_type6CPRB_msgX(struct ap_device *ap_dev, + struct ap_message *ap_msg, + unsigned random_number_length) +{ + struct { + struct type6_hdr hdr; + struct CPRBX cprbx; + char function_code[2]; + short int rule_length; + char rule[8]; + short int verb_length; + short int key_length; + } __attribute__((packed)) *msg = ap_msg->message; + static struct type6_hdr static_type6_hdrX = { + .type = 0x06, + .offset1 = 0x00000058, + .agent_id = {'C', 'A'}, + .function_code = {'R', 'L'}, + .ToCardLen1 = sizeof *msg - sizeof(msg->hdr), + .FromCardLen1 = sizeof *msg - sizeof(msg->hdr), + }; + static struct CPRBX static_cprbx = { + .cprb_len = 0x00dc, + .cprb_ver_id = 0x02, + .func_id = {0x54, 0x32}, + .req_parml = sizeof *msg - sizeof(msg->hdr) - + sizeof(msg->cprbx), + .rpl_msgbl = sizeof *msg - sizeof(msg->hdr), + }; + + msg->hdr = static_type6_hdrX; + msg->hdr.FromCardLen2 = random_number_length, + msg->cprbx = static_cprbx; + msg->cprbx.rpl_datal = random_number_length, + msg->cprbx.domain = AP_QID_QUEUE(ap_dev->qid); + memcpy(msg->function_code, msg->hdr.function_code, 0x02); + msg->rule_length = 0x0a; + memcpy(msg->rule, "RANDOM ", 8); + msg->verb_length = 0x02; + msg->key_length = 0x02; + ap_msg->length = sizeof *msg; +} + /** * Copy results from a type 86 ICA reply message back to user space. 
* @@ -509,6 +558,26 @@ static int convert_type86_xcrb(struct zcrypt_device *zdev, return 0; } +static int convert_type86_rng(struct zcrypt_device *zdev, + struct ap_message *reply, + char *buffer) +{ + struct { + struct type86_hdr hdr; + struct type86_fmt2_ext fmt2; + struct CPRBX cprbx; + } __attribute__((packed)) *msg = reply->message; + char *data = reply->message; + + if (msg->cprbx.ccp_rtcode != 0 || msg->cprbx.ccp_rscode != 0) { + PDEBUG("RNG response error on PCIXCC/CEX2C rc=%hu/rs=%hu\n", + rc, rs); + return -EINVAL; + } + memcpy(buffer, data + msg->fmt2.offset2, msg->fmt2.count2); + return msg->fmt2.count2; +} + static int convert_response_ica(struct zcrypt_device *zdev, struct ap_message *reply, char __user *outputdata, @@ -567,6 +636,31 @@ static int convert_response_xcrb(struct zcrypt_device *zdev, } } +static int convert_response_rng(struct zcrypt_device *zdev, + struct ap_message *reply, + char *data) +{ + struct type86x_reply *msg = reply->message; + + switch (msg->hdr.type) { + case TYPE82_RSP_CODE: + case TYPE88_RSP_CODE: + return -EINVAL; + case TYPE86_RSP_CODE: + if (msg->hdr.reply_code) + return -EINVAL; + if (msg->cprbx.cprb_ver_id == 0x02) + return convert_type86_rng(zdev, reply, data); + /* no break, incorrect cprb version is an unknown response */ + default: /* Unknown response type, this should NEVER EVER happen */ + PRINTK("Unrecognized Message Header: %08x%08x\n", + *(unsigned int *) reply->message, + *(unsigned int *) (reply->message+4)); + zdev->online = 0; + return -EAGAIN; /* repeat the request on a different device. */ + } +} + /** * This function is called from the AP bus code after a crypto request * "msg" has finished with the reply message "reply". @@ -735,6 +829,42 @@ out_free: return rc; } +/** + * The request distributor calls this function if it picked the PCIXCC/CEX2C + * device to generate random data. + * @zdev: pointer to zcrypt_device structure that identifies the + * PCIXCC/CEX2C device to the request distributor + * @buffer: pointer to a memory page to return random data + */ + +static long zcrypt_pcixcc_rng(struct zcrypt_device *zdev, + char *buffer) +{ + struct ap_message ap_msg; + struct response_type resp_type = { + .type = PCIXCC_RESPONSE_TYPE_XCRB, + }; + int rc; + + ap_msg.message = kmalloc(PCIXCC_MAX_XCRB_MESSAGE_SIZE, GFP_KERNEL); + if (!ap_msg.message) + return -ENOMEM; + ap_msg.psmid = (((unsigned long long) current->pid) << 32) + + atomic_inc_return(&zcrypt_step); + ap_msg.private = &resp_type; + rng_type6CPRB_msgX(zdev->ap_dev, &ap_msg, ZCRYPT_RNG_BUFFER_SIZE); + init_completion(&resp_type.work); + ap_queue_message(zdev->ap_dev, &ap_msg); + rc = wait_for_completion_interruptible(&resp_type.work); + if (rc == 0) + rc = convert_response_rng(zdev, &ap_msg, buffer); + else + /* Signal pending. */ + ap_cancel_message(zdev->ap_dev, &ap_msg); + kfree(ap_msg.message); + return rc; +} + /** * The crypto operations for a PCIXCC/CEX2C card. */ @@ -744,6 +874,13 @@ static struct zcrypt_ops zcrypt_pcixcc_ops = { .send_cprb = zcrypt_pcixcc_send_cprb, }; +static struct zcrypt_ops zcrypt_pcixcc_with_rng_ops = { + .rsa_modexpo = zcrypt_pcixcc_modexpo, + .rsa_modexpo_crt = zcrypt_pcixcc_modexpo_crt, + .send_cprb = zcrypt_pcixcc_send_cprb, + .rng = zcrypt_pcixcc_rng, +}; + /** * Micro-code detection function. Its sends a message to a pcixcc card * to find out the microcode level. @@ -858,6 +995,58 @@ out_free: return rc; } +/** + * Large random number detection function. 
Its sends a message to a pcixcc + * card to find out if large random numbers are supported. + * @ap_dev: pointer to the AP device. + * + * Returns 1 if large random numbers are supported, 0 if not and < 0 on error. + */ +static int zcrypt_pcixcc_rng_supported(struct ap_device *ap_dev) +{ + struct ap_message ap_msg; + unsigned long long psmid; + struct { + struct type86_hdr hdr; + struct type86_fmt2_ext fmt2; + struct CPRBX cprbx; + } __attribute__((packed)) *reply; + int rc, i; + + ap_msg.message = (void *) get_zeroed_page(GFP_KERNEL); + if (!ap_msg.message) + return -ENOMEM; + + rng_type6CPRB_msgX(ap_dev, &ap_msg, 4); + rc = ap_send(ap_dev->qid, 0x0102030405060708ULL, ap_msg.message, + ap_msg.length); + if (rc) + goto out_free; + + /* Wait for the test message to complete. */ + for (i = 0; i < 2 * HZ; i++) { + msleep(1000 / HZ); + rc = ap_recv(ap_dev->qid, &psmid, ap_msg.message, 4096); + if (rc == 0 && psmid == 0x0102030405060708ULL) + break; + } + + if (i >= 2 * HZ) { + /* Got no answer. */ + rc = -ENODEV; + goto out_free; + } + + reply = ap_msg.message; + if (reply->cprbx.ccp_rtcode == 0 && reply->cprbx.ccp_rscode == 0) + rc = 1; + else + rc = 0; +out_free: + free_page((unsigned long) ap_msg.message); + return rc; +} + /** * Probe function for PCIXCC/CEX2C cards. It always accepts the AP device * since the bus_match already checked the hardware type. The PCIXCC @@ -874,7 +1063,6 @@ static int zcrypt_pcixcc_probe(struct ap_device *ap_dev) if (!zdev) return -ENOMEM; zdev->ap_dev = ap_dev; - zdev->ops = &zcrypt_pcixcc_ops; zdev->online = 1; if (ap_dev->device_type == AP_DEVICE_TYPE_PCIXCC) { rc = zcrypt_pcixcc_mcl(ap_dev); @@ -901,6 +1089,15 @@ static int zcrypt_pcixcc_probe(struct ap_device *ap_dev) zdev->min_mod_size = PCIXCC_MIN_MOD_SIZE; zdev->max_mod_size = PCIXCC_MAX_MOD_SIZE; } + rc = zcrypt_pcixcc_rng_supported(ap_dev); + if (rc < 0) { + zcrypt_device_free(zdev); + return rc; + } + if (rc) + zdev->ops = &zcrypt_pcixcc_with_rng_ops; + else + zdev->ops = &zcrypt_pcixcc_ops; ap_dev->reply = &zdev->reply; ap_dev->private = zdev; rc = zcrypt_device_register(zdev); -- cgit v1.2.3-59-g8ed1b From 2a2cf6b18626e66b7898013dfa4df8fe2feca568 Mon Sep 17 00:00:00 2001 From: Harvey Harrison Date: Thu, 17 Apr 2008 07:46:21 +0200 Subject: [S390] replace remaining __FUNCTION__ occurrences __FUNCTION__ is gcc-specific, use __func__ Signed-off-by: Harvey Harrison Signed-off-by: Andrew Morton Signed-off-by: Martin Schwidefsky Signed-off-by: Heiko Carstens --- drivers/s390/block/dasd.c | 4 +- drivers/s390/char/tape_34xx.c | 2 +- drivers/s390/char/vmwatchdog.c | 4 +- drivers/s390/char/zcore.c | 2 +- drivers/s390/cio/device_status.c | 2 +- drivers/s390/crypto/zcrypt_api.h | 8 +- drivers/s390/net/claw.c | 344 +++++++++++++++++++-------------------- drivers/s390/net/netiucv.c | 97 ++++++----- drivers/s390/s390mach.c | 8 +- drivers/s390/scsi/zfcp_def.h | 2 +- 10 files changed, 236 insertions(+), 237 deletions(-) (limited to 'drivers/s390') diff --git a/drivers/s390/block/dasd.c b/drivers/s390/block/dasd.c index 54f686d2c694..bb72e0a5b0e0 100644 --- a/drivers/s390/block/dasd.c +++ b/drivers/s390/block/dasd.c @@ -980,12 +980,12 @@ void dasd_int_handler(struct ccw_device *cdev, unsigned long intparm, break; case -ETIMEDOUT: printk(KERN_WARNING"%s(%s): request timed out\n", - __FUNCTION__, cdev->dev.bus_id); + __func__, cdev->dev.bus_id); //FIXME - dasd uses own timeout interface... 
break; default: printk(KERN_WARNING"%s(%s): unknown error %ld\n", - __FUNCTION__, cdev->dev.bus_id, PTR_ERR(irb)); + __func__, cdev->dev.bus_id, PTR_ERR(irb)); } return; } diff --git a/drivers/s390/char/tape_34xx.c b/drivers/s390/char/tape_34xx.c index 5b47e9cce75f..874adf365e46 100644 --- a/drivers/s390/char/tape_34xx.c +++ b/drivers/s390/char/tape_34xx.c @@ -394,7 +394,7 @@ tape_34xx_unit_check(struct tape_device *device, struct tape_request *request, return tape_34xx_erp_failed(request, -ENOSPC); default: PRINT_ERR("Invalid op in %s:%i\n", - __FUNCTION__, __LINE__); + __func__, __LINE__); return tape_34xx_erp_failed(request, 0); } } diff --git a/drivers/s390/char/vmwatchdog.c b/drivers/s390/char/vmwatchdog.c index 6f40facb1c4d..19f8389291b6 100644 --- a/drivers/s390/char/vmwatchdog.c +++ b/drivers/s390/char/vmwatchdog.c @@ -96,7 +96,7 @@ static int vmwdt_keepalive(void) if (ret) { printk(KERN_WARNING "%s: problem setting interval %d, " - "cmd %s\n", __FUNCTION__, vmwdt_interval, + "cmd %s\n", __func__, vmwdt_interval, vmwdt_cmd); } return ret; @@ -107,7 +107,7 @@ static int vmwdt_disable(void) int ret = __diag288(wdt_cancel, 0, "", 0); if (ret) { printk(KERN_WARNING "%s: problem disabling watchdog\n", - __FUNCTION__); + __func__); } return ret; } diff --git a/drivers/s390/char/zcore.c b/drivers/s390/char/zcore.c index f523501e6e6c..bbbd14e9d48f 100644 --- a/drivers/s390/char/zcore.c +++ b/drivers/s390/char/zcore.c @@ -224,7 +224,7 @@ static int __init init_cpu_info(enum arch_id arch) sa = kmalloc(sizeof(*sa), GFP_KERNEL); if (!sa) { - ERROR_MSG("kmalloc failed: %s: %i\n",__FUNCTION__, __LINE__); + ERROR_MSG("kmalloc failed: %s: %i\n",__func__, __LINE__); return -ENOMEM; } if (memcpy_hsa_kernel(sa, sys_info.sa_base, sys_info.sa_size) < 0) { diff --git a/drivers/s390/cio/device_status.c b/drivers/s390/cio/device_status.c index 4764b9e00b9e..4a38993000f2 100644 --- a/drivers/s390/cio/device_status.c +++ b/drivers/s390/cio/device_status.c @@ -62,7 +62,7 @@ ccw_device_path_notoper(struct ccw_device *cdev) stsch (sch->schid, &sch->schib); CIO_MSG_EVENT(0, "%s(0.%x.%04x) - path(s) %02x are " - "not operational \n", __FUNCTION__, + "not operational \n", __func__, sch->schid.ssid, sch->schid.sch_no, sch->schib.pmcw.pnom); diff --git a/drivers/s390/crypto/zcrypt_api.h b/drivers/s390/crypto/zcrypt_api.h index 0e948528a73a..5c6e222b2ac4 100644 --- a/drivers/s390/crypto/zcrypt_api.h +++ b/drivers/s390/crypto/zcrypt_api.h @@ -43,17 +43,17 @@ #define DEV_NAME "zcrypt" #define PRINTK(fmt, args...) \ - printk(KERN_DEBUG DEV_NAME ": %s -> " fmt, __FUNCTION__ , ## args) + printk(KERN_DEBUG DEV_NAME ": %s -> " fmt, __func__ , ## args) #define PRINTKN(fmt, args...) \ printk(KERN_DEBUG DEV_NAME ": " fmt, ## args) #define PRINTKW(fmt, args...) \ - printk(KERN_WARNING DEV_NAME ": %s -> " fmt, __FUNCTION__ , ## args) + printk(KERN_WARNING DEV_NAME ": %s -> " fmt, __func__ , ## args) #define PRINTKC(fmt, args...) \ - printk(KERN_CRIT DEV_NAME ": %s -> " fmt, __FUNCTION__ , ## args) + printk(KERN_CRIT DEV_NAME ": %s -> " fmt, __func__ , ## args) #ifdef ZCRYPT_DEBUG #define PDEBUG(fmt, args...) \ - printk(KERN_DEBUG DEV_NAME ": %s -> " fmt, __FUNCTION__ , ## args) + printk(KERN_DEBUG DEV_NAME ": %s -> " fmt, __func__ , ## args) #else #define PDEBUG(fmt, args...) 
do {} while (0) #endif diff --git a/drivers/s390/net/claw.c b/drivers/s390/net/claw.c index d8a5c229c5a7..04a1d7bf678c 100644 --- a/drivers/s390/net/claw.c +++ b/drivers/s390/net/claw.c @@ -299,7 +299,7 @@ claw_probe(struct ccwgroup_device *cgdev) struct claw_privbk *privptr=NULL; #ifdef FUNCTRACE - printk(KERN_INFO "%s Enter\n",__FUNCTION__); + printk(KERN_INFO "%s Enter\n",__func__); #endif CLAW_DBF_TEXT(2,setup,"probe"); if (!get_device(&cgdev->dev)) @@ -313,7 +313,7 @@ claw_probe(struct ccwgroup_device *cgdev) probe_error(cgdev); put_device(&cgdev->dev); printk(KERN_WARNING "Out of memory %s %s Exit Line %d \n", - cgdev->cdev[0]->dev.bus_id,__FUNCTION__,__LINE__); + cgdev->cdev[0]->dev.bus_id,__func__,__LINE__); CLAW_DBF_TEXT_(2,setup,"probex%d",-ENOMEM); return -ENOMEM; } @@ -323,7 +323,7 @@ claw_probe(struct ccwgroup_device *cgdev) probe_error(cgdev); put_device(&cgdev->dev); printk(KERN_WARNING "Out of memory %s %s Exit Line %d \n", - cgdev->cdev[0]->dev.bus_id,__FUNCTION__,__LINE__); + cgdev->cdev[0]->dev.bus_id,__func__,__LINE__); CLAW_DBF_TEXT_(2,setup,"probex%d",-ENOMEM); return -ENOMEM; } @@ -340,7 +340,7 @@ claw_probe(struct ccwgroup_device *cgdev) probe_error(cgdev); put_device(&cgdev->dev); printk(KERN_WARNING "add_files failed %s %s Exit Line %d \n", - cgdev->cdev[0]->dev.bus_id,__FUNCTION__,__LINE__); + cgdev->cdev[0]->dev.bus_id,__func__,__LINE__); CLAW_DBF_TEXT_(2,setup,"probex%d",rc); return rc; } @@ -351,7 +351,7 @@ claw_probe(struct ccwgroup_device *cgdev) cgdev->dev.driver_data = privptr; #ifdef FUNCTRACE printk(KERN_INFO "claw:%s exit on line %d, " - "rc = 0\n",__FUNCTION__,__LINE__); + "rc = 0\n",__func__,__LINE__); #endif CLAW_DBF_TEXT(2,setup,"prbext 0"); @@ -371,7 +371,7 @@ claw_tx(struct sk_buff *skb, struct net_device *dev) struct chbk *p_ch; #ifdef FUNCTRACE - printk(KERN_INFO "%s:%s enter\n",dev->name,__FUNCTION__); + printk(KERN_INFO "%s:%s enter\n",dev->name,__func__); #endif CLAW_DBF_TEXT(4,trace,"claw_tx"); p_ch=&privptr->channel[WRITE]; @@ -381,7 +381,7 @@ claw_tx(struct sk_buff *skb, struct net_device *dev) privptr->stats.tx_dropped++; #ifdef FUNCTRACE printk(KERN_INFO "%s: %s() exit on line %d, rc = EIO\n", - dev->name,__FUNCTION__, __LINE__); + dev->name,__func__, __LINE__); #endif CLAW_DBF_TEXT_(2,trace,"clawtx%d",-EIO); return -EIO; @@ -398,7 +398,7 @@ claw_tx(struct sk_buff *skb, struct net_device *dev) spin_unlock_irqrestore(get_ccwdev_lock(p_ch->cdev), saveflags); #ifdef FUNCTRACE printk(KERN_INFO "%s:%s exit on line %d, rc = %d\n", - dev->name, __FUNCTION__, __LINE__, rc); + dev->name, __func__, __LINE__, rc); #endif CLAW_DBF_TEXT_(4,trace,"clawtx%d",rc); return rc; @@ -460,7 +460,7 @@ claw_pack_skb(struct claw_privbk *privptr) #ifdef IOTRACE printk(KERN_INFO "%s: %s() Packed %d len %d\n", p_env->ndev->name, - __FUNCTION__,pkt_cnt,new_skb->len); + __func__,pkt_cnt,new_skb->len); #endif } CLAW_DBF_TEXT(4,trace,"PackSKBx"); @@ -478,7 +478,7 @@ claw_change_mtu(struct net_device *dev, int new_mtu) struct claw_privbk *privptr=dev->priv; int buff_size; #ifdef FUNCTRACE - printk(KERN_INFO "%s:%s Enter \n",dev->name,__FUNCTION__); + printk(KERN_INFO "%s:%s Enter \n",dev->name,__func__); #endif #ifdef DEBUGMSG printk(KERN_INFO "variable dev =\n"); @@ -491,14 +491,14 @@ claw_change_mtu(struct net_device *dev, int new_mtu) #ifdef FUNCTRACE printk(KERN_INFO "%s:%s Exit on line %d, rc=EINVAL\n", dev->name, - __FUNCTION__, __LINE__); + __func__, __LINE__); #endif return -EINVAL; } dev->mtu = new_mtu; #ifdef FUNCTRACE printk(KERN_INFO "%s:%s Exit on line 
%d\n",dev->name, - __FUNCTION__, __LINE__); + __func__, __LINE__); #endif return 0; } /* end of claw_change_mtu */ @@ -522,7 +522,7 @@ claw_open(struct net_device *dev) struct ccwbk *p_buf; #ifdef FUNCTRACE - printk(KERN_INFO "%s:%s Enter \n",dev->name,__FUNCTION__); + printk(KERN_INFO "%s:%s Enter \n",dev->name,__func__); #endif CLAW_DBF_TEXT(4,trace,"open"); if (!dev || (dev->name[0] == 0x00)) { @@ -537,7 +537,7 @@ claw_open(struct net_device *dev) if (rc) { printk(KERN_INFO "%s:%s Exit on line %d, rc=ENOMEM\n", dev->name, - __FUNCTION__, __LINE__); + __func__, __LINE__); CLAW_DBF_TEXT(2,trace,"openmem"); return -ENOMEM; } @@ -661,7 +661,7 @@ claw_open(struct net_device *dev) claw_clear_busy(dev); #ifdef FUNCTRACE printk(KERN_INFO "%s:%s Exit on line %d, rc=EIO\n", - dev->name,__FUNCTION__,__LINE__); + dev->name,__func__,__LINE__); #endif CLAW_DBF_TEXT(2,trace,"open EIO"); return -EIO; @@ -673,7 +673,7 @@ claw_open(struct net_device *dev) #ifdef FUNCTRACE printk(KERN_INFO "%s:%s Exit on line %d, rc=0\n", - dev->name,__FUNCTION__,__LINE__); + dev->name,__func__,__LINE__); #endif CLAW_DBF_TEXT(4,trace,"openok"); return 0; @@ -696,7 +696,7 @@ claw_irq_handler(struct ccw_device *cdev, #ifdef FUNCTRACE - printk(KERN_INFO "%s enter \n",__FUNCTION__); + printk(KERN_INFO "%s enter \n",__func__); #endif CLAW_DBF_TEXT(4,trace,"clawirq"); /* Bypass all 'unsolicited interrupts' */ @@ -706,7 +706,7 @@ claw_irq_handler(struct ccw_device *cdev, cdev->dev.bus_id,irb->scsw.cstat, irb->scsw.dstat); #ifdef FUNCTRACE printk(KERN_INFO "claw: %s() " - "exit on line %d\n",__FUNCTION__,__LINE__); + "exit on line %d\n",__func__,__LINE__); #endif CLAW_DBF_TEXT(2,trace,"badirq"); return; @@ -752,7 +752,7 @@ claw_irq_handler(struct ccw_device *cdev, #endif #ifdef FUNCTRACE printk(KERN_INFO "%s:%s Exit on line %d\n", - dev->name,__FUNCTION__,__LINE__); + dev->name,__func__,__LINE__); #endif CLAW_DBF_TEXT(2,trace,"chanchk"); /* return; */ @@ -777,7 +777,7 @@ claw_irq_handler(struct ccw_device *cdev, (SCSW_STCTL_ALERT_STATUS | SCSW_STCTL_STATUS_PEND)))) { #ifdef FUNCTRACE printk(KERN_INFO "%s:%s Exit on line %d\n", - dev->name,__FUNCTION__,__LINE__); + dev->name,__func__,__LINE__); #endif return; } @@ -788,7 +788,7 @@ claw_irq_handler(struct ccw_device *cdev, #endif #ifdef FUNCTRACE printk(KERN_INFO "%s:%s Exit on line %d\n", - dev->name,__FUNCTION__,__LINE__); + dev->name,__func__,__LINE__); #endif CLAW_DBF_TEXT(4,trace,"stop"); return; @@ -804,7 +804,7 @@ claw_irq_handler(struct ccw_device *cdev, (SCSW_STCTL_ALERT_STATUS | SCSW_STCTL_STATUS_PEND)))) { #ifdef FUNCTRACE printk(KERN_INFO "%s:%s Exit on line %d\n", - dev->name,__FUNCTION__,__LINE__); + dev->name,__func__,__LINE__); #endif CLAW_DBF_TEXT(4,trace,"haltio"); return; @@ -838,7 +838,7 @@ claw_irq_handler(struct ccw_device *cdev, #endif #ifdef FUNCTRACE printk(KERN_INFO "%s:%s Exit on line %d\n", - dev->name,__FUNCTION__,__LINE__); + dev->name,__func__,__LINE__); #endif CLAW_DBF_TEXT(4,trace,"haltio"); return; @@ -858,7 +858,7 @@ claw_irq_handler(struct ccw_device *cdev, } #ifdef FUNCTRACE printk(KERN_INFO "%s:%s Exit on line %d\n", - dev->name,__FUNCTION__,__LINE__); + dev->name,__func__,__LINE__); #endif CLAW_DBF_TEXT(4,trace,"notrdy"); return; @@ -874,7 +874,7 @@ claw_irq_handler(struct ccw_device *cdev, } #ifdef FUNCTRACE printk(KERN_INFO "%s:%s Exit on line %d\n", - dev->name,__FUNCTION__,__LINE__); + dev->name,__func__,__LINE__); #endif CLAW_DBF_TEXT(4,trace,"PCI_read"); return; @@ -885,7 +885,7 @@ claw_irq_handler(struct ccw_device *cdev, 
(SCSW_STCTL_ALERT_STATUS | SCSW_STCTL_STATUS_PEND)))) { #ifdef FUNCTRACE printk(KERN_INFO "%s:%s Exit on line %d\n", - dev->name,__FUNCTION__,__LINE__); + dev->name,__func__,__LINE__); #endif CLAW_DBF_TEXT(4,trace,"SPend_rd"); return; @@ -906,7 +906,7 @@ claw_irq_handler(struct ccw_device *cdev, #endif #ifdef FUNCTRACE printk(KERN_INFO "%s:%s Exit on line %d\n", - dev->name,__FUNCTION__,__LINE__); + dev->name,__func__,__LINE__); #endif CLAW_DBF_TEXT(4,trace,"RdIRQXit"); return; @@ -929,7 +929,7 @@ claw_irq_handler(struct ccw_device *cdev, } #ifdef FUNCTRACE printk(KERN_INFO "%s:%s Exit on line %d\n", - dev->name,__FUNCTION__,__LINE__); + dev->name,__func__,__LINE__); #endif CLAW_DBF_TEXT(4,trace,"rstrtwrt"); return; @@ -946,7 +946,7 @@ claw_irq_handler(struct ccw_device *cdev, (SCSW_STCTL_ALERT_STATUS | SCSW_STCTL_STATUS_PEND)))) { #ifdef FUNCTRACE printk(KERN_INFO "%s:%s Exit on line %d\n", - dev->name,__FUNCTION__,__LINE__); + dev->name,__func__,__LINE__); #endif CLAW_DBF_TEXT(4,trace,"writeUE"); return; @@ -969,7 +969,7 @@ claw_irq_handler(struct ccw_device *cdev, #endif #ifdef FUNCTRACE printk(KERN_INFO "%s:%s Exit on line %d\n", - dev->name,__FUNCTION__,__LINE__); + dev->name,__func__,__LINE__); #endif CLAW_DBF_TEXT(4,trace,"StWtExit"); return; @@ -978,7 +978,7 @@ claw_irq_handler(struct ccw_device *cdev, "state=%d\n",dev->name,p_ch->claw_state); #ifdef FUNCTRACE printk(KERN_INFO "%s:%s Exit on line %d\n", - dev->name,__FUNCTION__,__LINE__); + dev->name,__func__,__LINE__); #endif CLAW_DBF_TEXT(2,trace,"badIRQ"); return; @@ -1001,7 +1001,7 @@ claw_irq_tasklet ( unsigned long data ) p_ch = (struct chbk *) data; dev = (struct net_device *)p_ch->ndev; #ifdef FUNCTRACE - printk(KERN_INFO "%s:%s Enter \n",dev->name,__FUNCTION__); + printk(KERN_INFO "%s:%s Enter \n",dev->name,__func__); #endif #ifdef DEBUGMSG printk(KERN_INFO "%s: variable p_ch =\n",dev->name); @@ -1021,7 +1021,7 @@ claw_irq_tasklet ( unsigned long data ) CLAW_DBF_TEXT(4,trace,"TskletXt"); #ifdef FUNCTRACE printk(KERN_INFO "%s:%s Exit on line %d\n", - dev->name,__FUNCTION__,__LINE__); + dev->name,__func__,__LINE__); #endif return; } /* end of claw_irq_bh */ @@ -1048,7 +1048,7 @@ claw_release(struct net_device *dev) if (!privptr) return 0; #ifdef FUNCTRACE - printk(KERN_INFO "%s:%s Enter \n",dev->name,__FUNCTION__); + printk(KERN_INFO "%s:%s Enter \n",dev->name,__func__); #endif CLAW_DBF_TEXT(4,trace,"release"); #ifdef DEBUGMSG @@ -1090,7 +1090,7 @@ claw_release(struct net_device *dev) if(privptr->buffs_alloc != 1) { #ifdef FUNCTRACE printk(KERN_INFO "%s:%s Exit on line %d\n", - dev->name,__FUNCTION__,__LINE__); + dev->name,__func__,__LINE__); #endif CLAW_DBF_TEXT(4,trace,"none2fre"); return 0; @@ -1171,7 +1171,7 @@ claw_release(struct net_device *dev) } #ifdef FUNCTRACE printk(KERN_INFO "%s:%s Exit on line %d\n", - dev->name,__FUNCTION__,__LINE__); + dev->name,__func__,__LINE__); #endif CLAW_DBF_TEXT(4,trace,"rlsexit"); return 0; @@ -1192,7 +1192,7 @@ claw_write_retry ( struct chbk *p_ch ) #ifdef FUNCTRACE - printk(KERN_INFO "%s:%s Enter\n",dev->name,__FUNCTION__); + printk(KERN_INFO "%s:%s Enter\n",dev->name,__func__); printk(KERN_INFO "claw: variable p_ch =\n"); dumpit((char *) p_ch, sizeof(struct chbk)); #endif @@ -1200,20 +1200,20 @@ claw_write_retry ( struct chbk *p_ch ) if (p_ch->claw_state == CLAW_STOP) { #ifdef FUNCTRACE printk(KERN_INFO "%s:%s Exit on line %d\n", - dev->name,__FUNCTION__,__LINE__); + dev->name,__func__,__LINE__); #endif return; } #ifdef DEBUGMSG printk( KERN_INFO "%s:%s state-%02x\n" , dev->name, - 
__FUNCTION__, + __func__, p_ch->claw_state); #endif claw_strt_out_IO( dev ); #ifdef FUNCTRACE printk(KERN_INFO "%s:%s Exit on line %d\n", - dev->name,__FUNCTION__,__LINE__); + dev->name,__func__,__LINE__); #endif CLAW_DBF_TEXT(4,trace,"rtry_xit"); return; @@ -1235,7 +1235,7 @@ claw_write_next ( struct chbk * p_ch ) int rc; #ifdef FUNCTRACE - printk(KERN_INFO "%s:%s Enter \n",p_ch->ndev->name,__FUNCTION__); + printk(KERN_INFO "%s:%s Enter \n",p_ch->ndev->name,__func__); printk(KERN_INFO "%s: variable p_ch =\n",p_ch->ndev->name); dumpit((char *) p_ch, sizeof(struct chbk)); #endif @@ -1262,7 +1262,7 @@ claw_write_next ( struct chbk * p_ch ) #ifdef FUNCTRACE printk(KERN_INFO "%s:%s Exit on line %d\n", - dev->name,__FUNCTION__,__LINE__); + dev->name,__func__,__LINE__); #endif return; } /* end of claw_write_next */ @@ -1276,7 +1276,7 @@ static void claw_timer ( struct chbk * p_ch ) { #ifdef FUNCTRACE - printk(KERN_INFO "%s:%s Entry\n",p_ch->ndev->name,__FUNCTION__); + printk(KERN_INFO "%s:%s Entry\n",p_ch->ndev->name,__func__); printk(KERN_INFO "%s: variable p_ch =\n",p_ch->ndev->name); dumpit((char *) p_ch, sizeof(struct chbk)); #endif @@ -1285,7 +1285,7 @@ claw_timer ( struct chbk * p_ch ) wake_up(&p_ch->wait); #ifdef FUNCTRACE printk(KERN_INFO "%s:%s Exit on line %d\n", - p_ch->ndev->name,__FUNCTION__,__LINE__); + p_ch->ndev->name,__func__,__LINE__); #endif return; } /* end of claw_timer */ @@ -1312,7 +1312,7 @@ pages_to_order_of_mag(int num_of_pages) int order_of_mag=1; /* assume 2 pages */ int nump=2; #ifdef FUNCTRACE - printk(KERN_INFO "%s Enter pages = %d \n",__FUNCTION__,num_of_pages); + printk(KERN_INFO "%s Enter pages = %d \n",__func__,num_of_pages); #endif CLAW_DBF_TEXT_(5,trace,"pages%d",num_of_pages); if (num_of_pages == 1) {return 0; } /* magnitude of 0 = 1 page */ @@ -1327,7 +1327,7 @@ pages_to_order_of_mag(int num_of_pages) if (order_of_mag > 9) { order_of_mag = 9; } /* I know it's paranoid */ #ifdef FUNCTRACE printk(KERN_INFO "%s Exit on line %d, order = %d\n", - __FUNCTION__,__LINE__, order_of_mag); + __func__,__LINE__, order_of_mag); #endif CLAW_DBF_TEXT_(5,trace,"mag%d",order_of_mag); return order_of_mag; @@ -1349,7 +1349,7 @@ add_claw_reads(struct net_device *dev, struct ccwbk* p_first, struct ccwbk* p_buf; #endif #ifdef FUNCTRACE - printk(KERN_INFO "%s:%s Enter \n",dev->name,__FUNCTION__); + printk(KERN_INFO "%s:%s Enter \n",dev->name,__func__); #endif #ifdef DEBUGMSG printk(KERN_INFO "dev\n"); @@ -1369,7 +1369,7 @@ add_claw_reads(struct net_device *dev, struct ccwbk* p_first, if ( p_first==NULL) { #ifdef FUNCTRACE printk(KERN_INFO "%s:%s Exit on line %d\n", - dev->name,__FUNCTION__,__LINE__); + dev->name,__func__,__LINE__); #endif CLAW_DBF_TEXT(4,trace,"addexit"); return 0; @@ -1400,9 +1400,9 @@ add_claw_reads(struct net_device *dev, struct ccwbk* p_first, if ( privptr-> p_read_active_first ==NULL ) { #ifdef DEBUGMSG printk(KERN_INFO "%s:%s p_read_active_first == NULL \n", - dev->name,__FUNCTION__); + dev->name,__func__); printk(KERN_INFO "%s:%s Read active first/last changed \n", - dev->name,__FUNCTION__); + dev->name,__func__); #endif privptr-> p_read_active_first= p_first; /* set new first */ privptr-> p_read_active_last = p_last; /* set new last */ @@ -1411,7 +1411,7 @@ add_claw_reads(struct net_device *dev, struct ccwbk* p_first, #ifdef DEBUGMSG printk(KERN_INFO "%s:%s Read in progress \n", - dev->name,__FUNCTION__); + dev->name,__func__); #endif /* set up TIC ccw */ temp_ccw.cda= (__u32)__pa(&p_first->read); @@ -1450,15 +1450,15 @@ add_claw_reads(struct net_device 
*dev, struct ccwbk* p_first, privptr->p_read_active_last=p_last; } /* end of if ( privptr-> p_read_active_first ==NULL) */ #ifdef IOTRACE - printk(KERN_INFO "%s:%s dump p_last CCW BK \n",dev->name,__FUNCTION__); + printk(KERN_INFO "%s:%s dump p_last CCW BK \n",dev->name,__func__); dumpit((char *)p_last, sizeof(struct ccwbk)); - printk(KERN_INFO "%s:%s dump p_end CCW BK \n",dev->name,__FUNCTION__); + printk(KERN_INFO "%s:%s dump p_end CCW BK \n",dev->name,__func__); dumpit((char *)p_end, sizeof(struct endccw)); - printk(KERN_INFO "%s:%s dump p_first CCW BK \n",dev->name,__FUNCTION__); + printk(KERN_INFO "%s:%s dump p_first CCW BK \n",dev->name,__func__); dumpit((char *)p_first, sizeof(struct ccwbk)); printk(KERN_INFO "%s:%s Dump Active CCW chain \n", - dev->name,__FUNCTION__); + dev->name,__func__); p_buf=privptr->p_read_active_first; while (p_buf!=NULL) { dumpit((char *)p_buf, sizeof(struct ccwbk)); @@ -1467,7 +1467,7 @@ add_claw_reads(struct net_device *dev, struct ccwbk* p_first, #endif #ifdef FUNCTRACE printk(KERN_INFO "%s:%s Exit on line %d\n", - dev->name,__FUNCTION__,__LINE__); + dev->name,__func__,__LINE__); #endif CLAW_DBF_TEXT(4,trace,"addexit"); return 0; @@ -1483,7 +1483,7 @@ ccw_check_return_code(struct ccw_device *cdev, int return_code) { #ifdef FUNCTRACE printk(KERN_INFO "%s: %s() > enter \n", - cdev->dev.bus_id,__FUNCTION__); + cdev->dev.bus_id,__func__); #endif CLAW_DBF_TEXT(4,trace,"ccwret"); #ifdef DEBUGMSG @@ -1516,7 +1516,7 @@ ccw_check_return_code(struct ccw_device *cdev, int return_code) } #ifdef FUNCTRACE printk(KERN_INFO "%s: %s() > exit on line %d\n", - cdev->dev.bus_id,__FUNCTION__,__LINE__); + cdev->dev.bus_id,__func__,__LINE__); #endif CLAW_DBF_TEXT(4,trace,"ccwret"); } /* end of ccw_check_return_code */ @@ -1531,7 +1531,7 @@ ccw_check_unit_check(struct chbk * p_ch, unsigned char sense ) struct net_device *dev = p_ch->ndev; #ifdef FUNCTRACE - printk(KERN_INFO "%s: %s() > enter\n",dev->name,__FUNCTION__); + printk(KERN_INFO "%s: %s() > enter\n",dev->name,__func__); #endif #ifdef DEBUGMSG printk(KERN_INFO "%s: variable dev =\n",dev->name); @@ -1578,7 +1578,7 @@ ccw_check_unit_check(struct chbk * p_ch, unsigned char sense ) #ifdef FUNCTRACE printk(KERN_INFO "%s: %s() exit on line %d\n", - dev->name,__FUNCTION__,__LINE__); + dev->name,__func__,__LINE__); #endif } /* end of ccw_check_unit_check */ @@ -1706,7 +1706,7 @@ find_link(struct net_device *dev, char *host_name, char *ws_name ) int rc=0; #ifdef FUNCTRACE - printk(KERN_INFO "%s:%s > enter \n",dev->name,__FUNCTION__); + printk(KERN_INFO "%s:%s > enter \n",dev->name,__func__); #endif CLAW_DBF_TEXT(2,setup,"findlink"); #ifdef DEBUGMSG @@ -1739,7 +1739,7 @@ find_link(struct net_device *dev, char *host_name, char *ws_name ) #ifdef FUNCTRACE printk(KERN_INFO "%s:%s Exit on line %d\n", - dev->name,__FUNCTION__,__LINE__); + dev->name,__func__,__LINE__); #endif return 0; } /* end of find_link */ @@ -1773,7 +1773,7 @@ claw_hw_tx(struct sk_buff *skb, struct net_device *dev, long linkid) struct ccwbk *p_buf; #endif #ifdef FUNCTRACE - printk(KERN_INFO "%s: %s() > enter\n",dev->name,__FUNCTION__); + printk(KERN_INFO "%s: %s() > enter\n",dev->name,__func__); #endif CLAW_DBF_TEXT(4,trace,"hw_tx"); #ifdef DEBUGMSG @@ -1787,7 +1787,7 @@ claw_hw_tx(struct sk_buff *skb, struct net_device *dev, long linkid) p_ch=(struct chbk *)&privptr->channel[WRITE]; p_env =privptr->p_env; #ifdef IOTRACE - printk(KERN_INFO "%s: %s() dump sk_buff \n",dev->name,__FUNCTION__); + printk(KERN_INFO "%s: %s() dump sk_buff \n",dev->name,__func__); 
dumpit((char *)skb ,sizeof(struct sk_buff)); #endif claw_free_wrt_buf(dev); /* Clean up free chain if posible */ @@ -1877,7 +1877,7 @@ claw_hw_tx(struct sk_buff *skb, struct net_device *dev, long linkid) while (len_of_data > 0) { #ifdef DEBUGMSG printk(KERN_INFO "%s: %s() length-of-data is %ld \n", - dev->name ,__FUNCTION__,len_of_data); + dev->name ,__func__,len_of_data); dumpit((char *)pDataAddress ,64); #endif p_this_ccw=privptr->p_write_free_chain; /* get a block */ @@ -1913,7 +1913,7 @@ claw_hw_tx(struct sk_buff *skb, struct net_device *dev, long linkid) p_last_ccw=p_this_ccw; /* save new last block */ #ifdef IOTRACE printk(KERN_INFO "%s: %s() > CCW and Buffer %ld bytes long \n", - dev->name,__FUNCTION__,bytesInThisBuffer); + dev->name,__func__,bytesInThisBuffer); dumpit((char *)p_this_ccw, sizeof(struct ccwbk)); dumpit((char *)p_this_ccw->p_buffer, 64); #endif @@ -1998,7 +1998,7 @@ claw_hw_tx(struct sk_buff *skb, struct net_device *dev, long linkid) #ifdef IOTRACE printk(KERN_INFO "%s: %s() > Dump Active CCW chain \n", - dev->name,__FUNCTION__); + dev->name,__func__); p_buf=privptr->p_write_active_first; while (p_buf!=NULL) { dumpit((char *)p_buf, sizeof(struct ccwbk)); @@ -2018,7 +2018,7 @@ claw_hw_tx(struct sk_buff *skb, struct net_device *dev, long linkid) /* if write free count is zero , set NOBUFFER */ #ifdef DEBUGMSG printk(KERN_INFO "%s: %s() > free_count is %d\n", - dev->name,__FUNCTION__, + dev->name,__func__, (int) privptr->write_free_count ); #endif if (privptr->write_free_count==0) { @@ -2029,7 +2029,7 @@ Done2: Done: #ifdef FUNCTRACE printk(KERN_INFO "%s: %s() > exit on line %d, rc = %d \n", - dev->name,__FUNCTION__,__LINE__, rc); + dev->name,__func__,__LINE__, rc); #endif return(rc); } /* end of claw_hw_tx */ @@ -2063,7 +2063,7 @@ init_ccw_bk(struct net_device *dev) addr_t real_TIC_address; int i,j; #ifdef FUNCTRACE - printk(KERN_INFO "%s: %s() enter \n",dev->name,__FUNCTION__); + printk(KERN_INFO "%s: %s() enter \n",dev->name,__func__); #endif CLAW_DBF_TEXT(4,trace,"init_ccw"); #ifdef DEBUGMSG @@ -2097,15 +2097,15 @@ init_ccw_bk(struct net_device *dev) #ifdef DEBUGMSG printk(KERN_INFO "%s: %s() " "ccw_blocks_required=%d\n", - dev->name,__FUNCTION__, + dev->name,__func__, ccw_blocks_required); printk(KERN_INFO "%s: %s() " "PAGE_SIZE=0x%x\n", - dev->name,__FUNCTION__, + dev->name,__func__, (unsigned int)PAGE_SIZE); printk(KERN_INFO "%s: %s() > " "PAGE_MASK=0x%x\n", - dev->name,__FUNCTION__, + dev->name,__func__, (unsigned int)PAGE_MASK); #endif /* @@ -2117,10 +2117,10 @@ init_ccw_bk(struct net_device *dev) #ifdef DEBUGMSG printk(KERN_INFO "%s: %s() > ccw_blocks_perpage=%d\n", - dev->name,__FUNCTION__, + dev->name,__func__, ccw_blocks_perpage); printk(KERN_INFO "%s: %s() > ccw_pages_required=%d\n", - dev->name,__FUNCTION__, + dev->name,__func__, ccw_pages_required); #endif /* @@ -2156,29 +2156,29 @@ init_ccw_bk(struct net_device *dev) #ifdef DEBUGMSG if (privptr->p_env->read_size < PAGE_SIZE) { printk(KERN_INFO "%s: %s() reads_perpage=%d\n", - dev->name,__FUNCTION__, + dev->name,__func__, claw_reads_perpage); } else { printk(KERN_INFO "%s: %s() pages_perread=%d\n", - dev->name,__FUNCTION__, + dev->name,__func__, privptr->p_buff_pages_perread); } printk(KERN_INFO "%s: %s() read_pages=%d\n", - dev->name,__FUNCTION__, + dev->name,__func__, claw_read_pages); if (privptr->p_env->write_size < PAGE_SIZE) { printk(KERN_INFO "%s: %s() writes_perpage=%d\n", - dev->name,__FUNCTION__, + dev->name,__func__, claw_writes_perpage); } else { printk(KERN_INFO "%s: %s() 
pages_perwrite=%d\n", - dev->name,__FUNCTION__, + dev->name,__func__, privptr->p_buff_pages_perwrite); } printk(KERN_INFO "%s: %s() write_pages=%d\n", - dev->name,__FUNCTION__, + dev->name,__func__, claw_write_pages); #endif @@ -2194,12 +2194,12 @@ init_ccw_bk(struct net_device *dev) printk(KERN_INFO "%s: %s() " "__get_free_pages for CCWs failed : " "pages is %d\n", - dev->name,__FUNCTION__, + dev->name,__func__, ccw_pages_required ); #ifdef FUNCTRACE printk(KERN_INFO "%s: %s() > " "exit on line %d, rc = ENOMEM\n", - dev->name,__FUNCTION__, + dev->name,__func__, __LINE__); #endif return -ENOMEM; @@ -2218,7 +2218,7 @@ init_ccw_bk(struct net_device *dev) /* Initialize ending CCW block */ #ifdef DEBUGMSG printk(KERN_INFO "%s: %s() begin initialize ending CCW blocks\n", - dev->name,__FUNCTION__); + dev->name,__func__); #endif p_endccw=privptr->p_end_ccw; @@ -2276,7 +2276,7 @@ init_ccw_bk(struct net_device *dev) #ifdef IOTRACE printk(KERN_INFO "%s: %s() dump claw ending CCW BK \n", - dev->name,__FUNCTION__); + dev->name,__func__); dumpit((char *)p_endccw, sizeof(struct endccw)); #endif @@ -2287,7 +2287,7 @@ init_ccw_bk(struct net_device *dev) #ifdef DEBUGMSG printk(KERN_INFO "%s: %s() Begin build a chain of CCW buffer \n", - dev->name,__FUNCTION__); + dev->name,__func__); #endif p_buff=privptr->p_buff_ccw; @@ -2306,7 +2306,7 @@ init_ccw_bk(struct net_device *dev) #ifdef DEBUGMSG printk(KERN_INFO "%s: %s() " "End build a chain of CCW buffer \n", - dev->name,__FUNCTION__); + dev->name,__func__); p_buf=p_free_chain; while (p_buf!=NULL) { dumpit((char *)p_buf, sizeof(struct ccwbk)); @@ -2321,7 +2321,7 @@ init_ccw_bk(struct net_device *dev) #ifdef DEBUGMSG printk(KERN_INFO "%s: %s() " "Begin initialize ClawSignalBlock \n", - dev->name,__FUNCTION__); + dev->name,__func__); #endif if (privptr->p_claw_signal_blk==NULL) { privptr->p_claw_signal_blk=p_free_chain; @@ -2334,7 +2334,7 @@ init_ccw_bk(struct net_device *dev) #ifdef DEBUGMSG printk(KERN_INFO "%s: %s() > End initialize " "ClawSignalBlock\n", - dev->name,__FUNCTION__); + dev->name,__func__); dumpit((char *)privptr->p_claw_signal_blk, sizeof(struct ccwbk)); #endif @@ -2349,14 +2349,14 @@ init_ccw_bk(struct net_device *dev) if (privptr->p_buff_write==NULL) { printk(KERN_INFO "%s: %s() __get_free_pages for write" " bufs failed : get is for %d pages\n", - dev->name,__FUNCTION__,claw_write_pages ); + dev->name,__func__,claw_write_pages ); free_pages((unsigned long)privptr->p_buff_ccw, (int)pages_to_order_of_mag(privptr->p_buff_ccw_num)); privptr->p_buff_ccw=NULL; #ifdef FUNCTRACE printk(KERN_INFO "%s: %s() > exit on line %d," "rc = ENOMEM\n", - dev->name,__FUNCTION__,__LINE__); + dev->name,__func__,__LINE__); #endif return -ENOMEM; } @@ -2369,7 +2369,7 @@ init_ccw_bk(struct net_device *dev) ccw_pages_required * PAGE_SIZE); #ifdef DEBUGMSG printk(KERN_INFO "%s: %s() Begin build claw write free " - "chain \n",dev->name,__FUNCTION__); + "chain \n",dev->name,__func__); #endif privptr->p_write_free_chain=NULL; @@ -2409,14 +2409,14 @@ init_ccw_bk(struct net_device *dev) #ifdef IOTRACE printk(KERN_INFO "%s:%s __get_free_pages " "for writes buf: get for %d pages\n", - dev->name,__FUNCTION__, + dev->name,__func__, privptr->p_buff_pages_perwrite); #endif if (p_buff==NULL) { printk(KERN_INFO "%s:%s __get_free_pages " "for writes buf failed : get is for %d pages\n", dev->name, - __FUNCTION__, + __func__, privptr->p_buff_pages_perwrite ); free_pages((unsigned long)privptr->p_buff_ccw, (int)pages_to_order_of_mag( @@ -2433,7 +2433,7 @@ init_ccw_bk(struct net_device 
*dev) #ifdef FUNCTRACE printk(KERN_INFO "%s: %s exit on line %d, rc = ENOMEM\n", dev->name, - __FUNCTION__, + __func__, __LINE__); #endif return -ENOMEM; @@ -2466,7 +2466,7 @@ init_ccw_bk(struct net_device *dev) #ifdef DEBUGMSG printk(KERN_INFO "%s:%s End build claw write free chain \n", - dev->name,__FUNCTION__); + dev->name,__func__); p_buf=privptr->p_write_free_chain; while (p_buf!=NULL) { dumpit((char *)p_buf, sizeof(struct ccwbk)); @@ -2485,7 +2485,7 @@ init_ccw_bk(struct net_device *dev) printk(KERN_INFO "%s: %s() " "__get_free_pages for read buf failed : " "get is for %d pages\n", - dev->name,__FUNCTION__,claw_read_pages ); + dev->name,__func__,claw_read_pages ); free_pages((unsigned long)privptr->p_buff_ccw, (int)pages_to_order_of_mag( privptr->p_buff_ccw_num)); @@ -2497,7 +2497,7 @@ init_ccw_bk(struct net_device *dev) privptr->p_buff_write=NULL; #ifdef FUNCTRACE printk(KERN_INFO "%s: %s() > exit on line %d, rc =" - " ENOMEM\n",dev->name,__FUNCTION__,__LINE__); + " ENOMEM\n",dev->name,__func__,__LINE__); #endif return -ENOMEM; } @@ -2509,7 +2509,7 @@ init_ccw_bk(struct net_device *dev) */ #ifdef DEBUGMSG printk(KERN_INFO "%s: %s() Begin build claw read free chain \n", - dev->name,__FUNCTION__); + dev->name,__func__); #endif p_buff=privptr->p_buff_read; for (i=0 ; i< privptr->p_env->read_buffers ; i++) { @@ -2590,7 +2590,7 @@ init_ccw_bk(struct net_device *dev) #ifdef DEBUGMSG printk(KERN_INFO "%s: %s() Begin build claw read free chain \n", - dev->name,__FUNCTION__); + dev->name,__func__); #endif for (i=0 ; i< privptr->p_env->read_buffers ; i++) { p_buff = (void *)__get_free_pages(__GFP_DMA, @@ -2598,7 +2598,7 @@ init_ccw_bk(struct net_device *dev) if (p_buff==NULL) { printk(KERN_INFO "%s: %s() __get_free_pages for read " "buf failed : get is for %d pages\n", - dev->name,__FUNCTION__, + dev->name,__func__, privptr->p_buff_pages_perread ); free_pages((unsigned long)privptr->p_buff_ccw, (int)pages_to_order_of_mag(privptr->p_buff_ccw_num)); @@ -2622,7 +2622,7 @@ init_ccw_bk(struct net_device *dev) privptr->p_buff_write=NULL; #ifdef FUNCTRACE printk(KERN_INFO "%s: %s() exit on line %d, rc = ENOMEM\n", - dev->name,__FUNCTION__, + dev->name,__func__, __LINE__); #endif return -ENOMEM; @@ -2695,7 +2695,7 @@ init_ccw_bk(struct net_device *dev) } /* pBuffread = NULL */ #ifdef DEBUGMSG printk(KERN_INFO "%s: %s() > End build claw read free chain \n", - dev->name,__FUNCTION__); + dev->name,__func__); p_buf=p_first_CCWB; while (p_buf!=NULL) { dumpit((char *)p_buf, sizeof(struct ccwbk)); @@ -2707,7 +2707,7 @@ init_ccw_bk(struct net_device *dev) privptr->buffs_alloc = 1; #ifdef FUNCTRACE printk(KERN_INFO "%s: %s() exit on line %d\n", - dev->name,__FUNCTION__,__LINE__); + dev->name,__func__,__LINE__); #endif return 0; } /* end of init_ccw_bk */ @@ -2723,11 +2723,11 @@ probe_error( struct ccwgroup_device *cgdev) { struct claw_privbk *privptr; #ifdef FUNCTRACE - printk(KERN_INFO "%s enter \n",__FUNCTION__); + printk(KERN_INFO "%s enter \n",__func__); #endif CLAW_DBF_TEXT(4,trace,"proberr"); #ifdef DEBUGMSG - printk(KERN_INFO "%s variable cgdev =\n",__FUNCTION__); + printk(KERN_INFO "%s variable cgdev =\n",__func__); dumpit((char *) cgdev, sizeof(struct ccwgroup_device)); #endif privptr=(struct claw_privbk *)cgdev->dev.driver_data; @@ -2741,7 +2741,7 @@ probe_error( struct ccwgroup_device *cgdev) } #ifdef FUNCTRACE printk(KERN_INFO "%s > exit on line %d\n", - __FUNCTION__,__LINE__); + __func__,__LINE__); #endif return; @@ -2772,7 +2772,7 @@ claw_process_control( struct net_device *dev, struct ccwbk * 
p_ccw) struct chbk *p_ch = NULL; #ifdef FUNCTRACE printk(KERN_INFO "%s: %s() > enter \n", - dev->name,__FUNCTION__); + dev->name,__func__); #endif CLAW_DBF_TEXT(2,setup,"clw_cntl"); #ifdef DEBUGMSG @@ -2794,7 +2794,7 @@ claw_process_control( struct net_device *dev, struct ccwbk * p_ccw) #ifdef FUNCTRACE printk(KERN_INFO "%s: %s() > " "exit on line %d, rc=0\n", - dev->name,__FUNCTION__,__LINE__); + dev->name,__func__,__LINE__); #endif return 0; } @@ -3057,7 +3057,7 @@ claw_process_control( struct net_device *dev, struct ccwbk * p_ccw) #ifdef FUNCTRACE printk(KERN_INFO "%s: %s() exit on line %d, rc = 0\n", - dev->name,__FUNCTION__,__LINE__); + dev->name,__func__,__LINE__); #endif return 0; @@ -3080,7 +3080,7 @@ claw_send_control(struct net_device *dev, __u8 type, __u8 link, struct sk_buff *skb; #ifdef FUNCTRACE - printk(KERN_INFO "%s:%s > enter \n",dev->name,__FUNCTION__); + printk(KERN_INFO "%s:%s > enter \n",dev->name,__func__); #endif CLAW_DBF_TEXT(2,setup,"sndcntl"); #ifdef DEBUGMSG @@ -3143,10 +3143,10 @@ claw_send_control(struct net_device *dev, __u8 type, __u8 link, skb = dev_alloc_skb(sizeof(struct clawctl)); if (!skb) { printk( "%s:%s low on mem, returning...\n", - dev->name,__FUNCTION__); + dev->name,__func__); #ifdef DEBUG printk(KERN_INFO "%s:%s Exit, rc = ENOMEM\n", - dev->name,__FUNCTION__); + dev->name,__func__); #endif return -ENOMEM; } @@ -3162,7 +3162,7 @@ claw_send_control(struct net_device *dev, __u8 type, __u8 link, claw_hw_tx(skb, dev, 0); #ifdef FUNCTRACE printk(KERN_INFO "%s:%s Exit on line %d\n", - dev->name,__FUNCTION__,__LINE__); + dev->name,__func__,__LINE__); #endif return 0; @@ -3180,7 +3180,7 @@ claw_snd_conn_req(struct net_device *dev, __u8 link) struct clawctl *p_ctl; #ifdef FUNCTRACE - printk(KERN_INFO "%s:%s Enter \n",dev->name,__FUNCTION__); + printk(KERN_INFO "%s:%s Enter \n",dev->name,__func__); #endif CLAW_DBF_TEXT(2,setup,"snd_conn"); #ifdef DEBUGMSG @@ -3193,7 +3193,7 @@ claw_snd_conn_req(struct net_device *dev, __u8 link) if ( privptr->system_validate_comp==0x00 ) { #ifdef FUNCTRACE printk(KERN_INFO "%s:%s Exit on line %d, rc = 1\n", - dev->name,__FUNCTION__,__LINE__); + dev->name,__func__,__LINE__); #endif return rc; } @@ -3209,7 +3209,7 @@ claw_snd_conn_req(struct net_device *dev, __u8 link) HOST_APPL_NAME, privptr->p_env->api_type); #ifdef FUNCTRACE printk(KERN_INFO "%s:%s Exit on line %d, rc = %d\n", - dev->name,__FUNCTION__,__LINE__, rc); + dev->name,__func__,__LINE__, rc); #endif return rc; @@ -3228,7 +3228,7 @@ claw_snd_disc(struct net_device *dev, struct clawctl * p_ctl) struct conncmd * p_connect; #ifdef FUNCTRACE - printk(KERN_INFO "%s:%s Enter\n",dev->name,__FUNCTION__); + printk(KERN_INFO "%s:%s Enter\n",dev->name,__func__); #endif CLAW_DBF_TEXT(2,setup,"snd_dsc"); #ifdef DEBUGMSG @@ -3244,7 +3244,7 @@ claw_snd_disc(struct net_device *dev, struct clawctl * p_ctl) p_connect->host_name, p_connect->WS_name); #ifdef FUNCTRACE printk(KERN_INFO "%s:%s Exit on line %d, rc = %d\n", - dev->name,__FUNCTION__, __LINE__, rc); + dev->name,__func__, __LINE__, rc); #endif return rc; } /* end of claw_snd_disc */ @@ -3265,7 +3265,7 @@ claw_snd_sys_validate_rsp(struct net_device *dev, #ifdef FUNCTRACE printk(KERN_INFO "%s:%s Enter\n", - dev->name,__FUNCTION__); + dev->name,__func__); #endif CLAW_DBF_TEXT(2,setup,"chkresp"); #ifdef DEBUGMSG @@ -3285,7 +3285,7 @@ claw_snd_sys_validate_rsp(struct net_device *dev, p_env->adapter_name ); #ifdef FUNCTRACE printk(KERN_INFO "%s:%s Exit on line %d, rc = %d\n", - dev->name,__FUNCTION__,__LINE__, rc); + 
dev->name,__func__,__LINE__, rc); #endif return rc; } /* end of claw_snd_sys_validate_rsp */ @@ -3301,7 +3301,7 @@ claw_strt_conn_req(struct net_device *dev ) int rc; #ifdef FUNCTRACE - printk(KERN_INFO "%s:%s Enter\n",dev->name,__FUNCTION__); + printk(KERN_INFO "%s:%s Enter\n",dev->name,__func__); #endif CLAW_DBF_TEXT(2,setup,"conn_req"); #ifdef DEBUGMSG @@ -3311,7 +3311,7 @@ claw_strt_conn_req(struct net_device *dev ) rc=claw_snd_conn_req(dev, 1); #ifdef FUNCTRACE printk(KERN_INFO "%s:%s Exit on line %d, rc = %d\n", - dev->name,__FUNCTION__,__LINE__, rc); + dev->name,__func__,__LINE__, rc); #endif return rc; } /* end of claw_strt_conn_req */ @@ -3327,13 +3327,13 @@ net_device_stats *claw_stats(struct net_device *dev) { struct claw_privbk *privptr; #ifdef FUNCTRACE - printk(KERN_INFO "%s:%s Enter\n",dev->name,__FUNCTION__); + printk(KERN_INFO "%s:%s Enter\n",dev->name,__func__); #endif CLAW_DBF_TEXT(4,trace,"stats"); privptr = dev->priv; #ifdef FUNCTRACE printk(KERN_INFO "%s:%s Exit on line %d\n", - dev->name,__FUNCTION__,__LINE__); + dev->name,__func__,__LINE__); #endif return &privptr->stats; } /* end of claw_stats */ @@ -3366,7 +3366,7 @@ unpack_read(struct net_device *dev ) int p=0; #ifdef FUNCTRACE - printk(KERN_INFO "%s:%s enter \n",dev->name,__FUNCTION__); + printk(KERN_INFO "%s:%s enter \n",dev->name,__func__); #endif CLAW_DBF_TEXT(4,trace,"unpkread"); p_first_ccw=NULL; @@ -3408,7 +3408,7 @@ unpack_read(struct net_device *dev ) if ((p_this_ccw->header.opcode & MORE_to_COME_FLAG)!=0) { #ifdef DEBUGMSG printk(KERN_INFO "%s: %s > More_to_come is ON\n", - dev->name,__FUNCTION__); + dev->name,__func__); #endif mtc_this_frm=1; if (p_this_ccw->header.length!= @@ -3435,7 +3435,7 @@ unpack_read(struct net_device *dev ) #ifdef DEBUGMSG printk(KERN_INFO "%s:%s goto next " "frame from MoretoComeSkip \n", - dev->name,__FUNCTION__); + dev->name,__func__); #endif goto NextFrame; } @@ -3445,7 +3445,7 @@ unpack_read(struct net_device *dev ) #ifdef DEBUGMSG printk(KERN_INFO "%s:%s goto next " "frame from claw_process_control \n", - dev->name,__FUNCTION__); + dev->name,__func__); #endif CLAW_DBF_TEXT(4,trace,"UnpkCntl"); goto NextFrame; @@ -3468,7 +3468,7 @@ unpack_next: if (privptr->mtc_logical_link<0) { #ifdef DEBUGMSG printk(KERN_INFO "%s: %s mtc_logical_link < 0 \n", - dev->name,__FUNCTION__); + dev->name,__func__); #endif /* @@ -3487,7 +3487,7 @@ unpack_next: printk(KERN_INFO "%s: %s > goto next " "frame from MoretoComeSkip \n", dev->name, - __FUNCTION__); + __func__); printk(KERN_INFO " bytes_to_mov %d > (MAX_ENVELOPE_" "SIZE-privptr->mtc_offset %d)\n", bytes_to_mov,(MAX_ENVELOPE_SIZE- privptr->mtc_offset)); @@ -3505,13 +3505,13 @@ unpack_next: } #ifdef DEBUGMSG printk(KERN_INFO "%s: %s() received data \n", - dev->name,__FUNCTION__); + dev->name,__func__); if (p_env->packing == DO_PACKED) dumpit((char *)p_packd+sizeof(struct clawph),32); else dumpit((char *)p_this_ccw->p_buffer, 32); printk(KERN_INFO "%s: %s() bytelength %d \n", - dev->name,__FUNCTION__,bytes_to_mov); + dev->name,__func__,bytes_to_mov); #endif if (mtc_this_frm==0) { len_of_data=privptr->mtc_offset+bytes_to_mov; @@ -3530,13 +3530,13 @@ unpack_next: #ifdef DEBUGMSG printk(KERN_INFO "%s: %s() netif_" "rx(skb) completed \n", - dev->name,__FUNCTION__); + dev->name,__func__); #endif } else { privptr->stats.rx_dropped++; printk(KERN_WARNING "%s: %s() low on memory\n", - dev->name,__FUNCTION__); + dev->name,__func__); } privptr->mtc_offset=0; privptr->mtc_logical_link=-1; @@ -3575,10 +3575,10 @@ NextFrame: #ifdef IOTRACE 
printk(KERN_INFO "%s:%s processed frame is %d \n", - dev->name,__FUNCTION__,i); + dev->name,__func__,i); printk(KERN_INFO "%s:%s F:%lx L:%lx\n", dev->name, - __FUNCTION__, + __func__, (unsigned long)p_first_ccw, (unsigned long)p_last_ccw); #endif @@ -3588,7 +3588,7 @@ NextFrame: claw_strt_read(dev, LOCK_YES); #ifdef FUNCTRACE printk(KERN_INFO "%s: %s exit on line %d\n", - dev->name, __FUNCTION__, __LINE__); + dev->name, __func__, __LINE__); #endif return; } /* end of unpack_read */ @@ -3610,7 +3610,7 @@ claw_strt_read (struct net_device *dev, int lock ) p_ch=&privptr->channel[READ]; #ifdef FUNCTRACE - printk(KERN_INFO "%s:%s Enter \n",dev->name,__FUNCTION__); + printk(KERN_INFO "%s:%s Enter \n",dev->name,__func__); printk(KERN_INFO "%s: variable lock = %d, dev =\n",dev->name, lock); dumpit((char *) dev, sizeof(struct net_device)); #endif @@ -3626,7 +3626,7 @@ claw_strt_read (struct net_device *dev, int lock ) } #ifdef DEBUGMSG printk(KERN_INFO "%s:%s state-%02x\n" , - dev->name,__FUNCTION__, p_ch->claw_state); + dev->name,__func__, p_ch->claw_state); #endif if (lock==LOCK_YES) { spin_lock_irqsave(get_ccwdev_lock(p_ch->cdev), saveflags); @@ -3634,7 +3634,7 @@ claw_strt_read (struct net_device *dev, int lock ) if (test_and_set_bit(0, (void *)&p_ch->IO_active) == 0) { #ifdef DEBUGMSG printk(KERN_INFO "%s: HOT READ started in %s\n" , - dev->name,__FUNCTION__); + dev->name,__func__); p_clawh=(struct clawh *)privptr->p_claw_signal_blk; dumpit((char *)&p_clawh->flag , 1); #endif @@ -3650,7 +3650,7 @@ claw_strt_read (struct net_device *dev, int lock ) else { #ifdef DEBUGMSG printk(KERN_INFO "%s: No READ started by %s() In progress\n" , - dev->name,__FUNCTION__); + dev->name,__func__); #endif CLAW_DBF_TEXT(2,trace,"ReadAct"); } @@ -3660,7 +3660,7 @@ claw_strt_read (struct net_device *dev, int lock ) } #ifdef FUNCTRACE printk(KERN_INFO "%s:%s Exit on line %d\n", - dev->name,__FUNCTION__,__LINE__); + dev->name,__func__,__LINE__); #endif CLAW_DBF_TEXT(4,trace,"StRdExit"); return; @@ -3681,7 +3681,7 @@ claw_strt_out_IO( struct net_device *dev ) struct ccwbk *p_first_ccw; #ifdef FUNCTRACE - printk(KERN_INFO "%s:%s Enter\n",dev->name,__FUNCTION__); + printk(KERN_INFO "%s:%s Enter\n",dev->name,__func__); #endif if (!dev) { return; @@ -3691,7 +3691,7 @@ claw_strt_out_IO( struct net_device *dev ) #ifdef DEBUGMSG printk(KERN_INFO "%s:%s state-%02x\n" , - dev->name,__FUNCTION__,p_ch->claw_state); + dev->name,__func__,p_ch->claw_state); #endif CLAW_DBF_TEXT(4,trace,"strt_io"); p_first_ccw=privptr->p_write_active_first; @@ -3701,14 +3701,14 @@ claw_strt_out_IO( struct net_device *dev ) if (p_first_ccw == NULL) { #ifdef FUNCTRACE printk(KERN_INFO "%s:%s Exit on line %d\n", - dev->name,__FUNCTION__,__LINE__); + dev->name,__func__,__LINE__); #endif return; } if (test_and_set_bit(0, (void *)&p_ch->IO_active) == 0) { parm = (unsigned long) p_ch; #ifdef DEBUGMSG - printk(KERN_INFO "%s:%s do_io \n" ,dev->name,__FUNCTION__); + printk(KERN_INFO "%s:%s do_io \n" ,dev->name,__func__); dumpit((char *)p_first_ccw, sizeof(struct ccwbk)); #endif CLAW_DBF_TEXT(2,trace,"StWrtIO"); @@ -3721,7 +3721,7 @@ claw_strt_out_IO( struct net_device *dev ) dev->trans_start = jiffies; #ifdef FUNCTRACE printk(KERN_INFO "%s:%s Exit on line %d\n", - dev->name,__FUNCTION__,__LINE__); + dev->name,__func__,__LINE__); #endif return; @@ -3745,7 +3745,7 @@ claw_free_wrt_buf( struct net_device *dev ) struct ccwbk*p_buf; #endif #ifdef FUNCTRACE - printk(KERN_INFO "%s:%s Enter\n",dev->name,__FUNCTION__); + printk(KERN_INFO "%s:%s 
Enter\n",dev->name,__func__); printk(KERN_INFO "%s: free count = %d variable dev =\n", dev->name,privptr->write_free_count); #endif @@ -3798,7 +3798,7 @@ claw_free_wrt_buf( struct net_device *dev ) privptr->p_write_active_last=NULL; #ifdef DEBUGMSG printk(KERN_INFO "%s:%s p_write_" - "active_first==NULL\n",dev->name,__FUNCTION__); + "active_first==NULL\n",dev->name,__func__); #endif } #ifdef IOTRACE @@ -3819,7 +3819,7 @@ claw_free_wrt_buf( struct net_device *dev ) CLAW_DBF_TEXT_(4,trace,"FWC=%d",privptr->write_free_count); #ifdef FUNCTRACE printk(KERN_INFO "%s:%s Exit on line %d free_count =%d\n", - dev->name,__FUNCTION__, __LINE__,privptr->write_free_count); + dev->name,__func__, __LINE__,privptr->write_free_count); #endif return; } @@ -3833,7 +3833,7 @@ claw_free_netdevice(struct net_device * dev, int free_dev) { struct claw_privbk *privptr; #ifdef FUNCTRACE - printk(KERN_INFO "%s:%s Enter\n",dev->name,__FUNCTION__); + printk(KERN_INFO "%s:%s Enter\n",dev->name,__func__); #endif CLAW_DBF_TEXT(2,setup,"free_dev"); @@ -3854,7 +3854,7 @@ claw_free_netdevice(struct net_device * dev, int free_dev) #endif CLAW_DBF_TEXT(2,setup,"feee_ok"); #ifdef FUNCTRACE - printk(KERN_INFO "%s:%s Exit\n",dev->name,__FUNCTION__); + printk(KERN_INFO "%s:%s Exit\n",dev->name,__func__); #endif } @@ -3867,13 +3867,13 @@ static void claw_init_netdevice(struct net_device * dev) { #ifdef FUNCTRACE - printk(KERN_INFO "%s:%s Enter\n",dev->name,__FUNCTION__); + printk(KERN_INFO "%s:%s Enter\n",dev->name,__func__); #endif CLAW_DBF_TEXT(2,setup,"init_dev"); CLAW_DBF_TEXT_(2,setup,"%s",dev->name); if (!dev) { printk(KERN_WARNING "claw:%s BAD Device exit line %d\n", - __FUNCTION__,__LINE__); + __func__,__LINE__); CLAW_DBF_TEXT(2,setup,"baddev"); return; } @@ -3889,7 +3889,7 @@ claw_init_netdevice(struct net_device * dev) dev->tx_queue_len = 1300; dev->flags = IFF_POINTOPOINT | IFF_NOARP; #ifdef FUNCTRACE - printk(KERN_INFO "%s:%s Exit\n",dev->name,__FUNCTION__); + printk(KERN_INFO "%s:%s Exit\n",dev->name,__func__); #endif CLAW_DBF_TEXT(2,setup,"initok"); return; @@ -3909,7 +3909,7 @@ add_channel(struct ccw_device *cdev,int i,struct claw_privbk *privptr) struct ccw_dev_id dev_id; #ifdef FUNCTRACE - printk(KERN_INFO "%s:%s Enter\n",cdev->dev.bus_id,__FUNCTION__); + printk(KERN_INFO "%s:%s Enter\n",cdev->dev.bus_id,__func__); #endif CLAW_DBF_TEXT_(2,setup,"%s",cdev->dev.bus_id); privptr->channel[i].flag = i+1; /* Read is 1 Write is 2 */ @@ -3920,16 +3920,16 @@ add_channel(struct ccw_device *cdev,int i,struct claw_privbk *privptr) p_ch->devno = dev_id.devno; if ((p_ch->irb = kzalloc(sizeof (struct irb),GFP_KERNEL)) == NULL) { printk(KERN_WARNING "%s Out of memory in %s for irb\n", - p_ch->id,__FUNCTION__); + p_ch->id,__func__); #ifdef FUNCTRACE printk(KERN_INFO "%s:%s Exit on line %d\n", - p_ch->id,__FUNCTION__,__LINE__); + p_ch->id,__func__,__LINE__); #endif return -ENOMEM; } #ifdef FUNCTRACE printk(KERN_INFO "%s:%s Exit on line %d\n", - cdev->dev.bus_id,__FUNCTION__,__LINE__); + cdev->dev.bus_id,__func__,__LINE__); #endif return 0; } @@ -3952,7 +3952,7 @@ claw_new_device(struct ccwgroup_device *cgdev) int ret; struct ccw_dev_id dev_id; - pr_debug("%s() called\n", __FUNCTION__); + pr_debug("%s() called\n", __func__); printk(KERN_INFO "claw: add for %s\n",cgdev->cdev[READ]->dev.bus_id); CLAW_DBF_TEXT(2,setup,"new_dev"); privptr = cgdev->dev.driver_data; @@ -3990,7 +3990,7 @@ claw_new_device(struct ccwgroup_device *cgdev) } dev = alloc_netdev(0,"claw%d",claw_init_netdevice); if (!dev) { - printk(KERN_WARNING "%s:alloc_netdev 
failed\n",__FUNCTION__); + printk(KERN_WARNING "%s:alloc_netdev failed\n",__func__); goto out; } dev->priv = privptr; @@ -4065,7 +4065,7 @@ claw_shutdown_device(struct ccwgroup_device *cgdev) struct net_device *ndev; int ret; - pr_debug("%s() called\n", __FUNCTION__); + pr_debug("%s() called\n", __func__); CLAW_DBF_TEXT_(2,setup,"%s",cgdev->dev.bus_id); priv = cgdev->dev.driver_data; if (!priv) @@ -4095,15 +4095,15 @@ claw_remove_device(struct ccwgroup_device *cgdev) { struct claw_privbk *priv; - pr_debug("%s() called\n", __FUNCTION__); + pr_debug("%s() called\n", __func__); CLAW_DBF_TEXT_(2,setup,"%s",cgdev->dev.bus_id); priv = cgdev->dev.driver_data; if (!priv) { - printk(KERN_WARNING "claw: %s() no Priv exiting\n",__FUNCTION__); + printk(KERN_WARNING "claw: %s() no Priv exiting\n",__func__); return; } printk(KERN_INFO "claw: %s() called %s will be removed.\n", - __FUNCTION__,cgdev->cdev[0]->dev.bus_id); + __func__,cgdev->cdev[0]->dev.bus_id); if (cgdev->state == CCWGROUP_ONLINE) claw_shutdown_device(cgdev); claw_remove_files(&cgdev->dev); @@ -4346,7 +4346,7 @@ static struct attribute_group claw_attr_group = { static int claw_add_files(struct device *dev) { - pr_debug("%s() called\n", __FUNCTION__); + pr_debug("%s() called\n", __func__); CLAW_DBF_TEXT(2,setup,"add_file"); return sysfs_create_group(&dev->kobj, &claw_attr_group); } @@ -4354,7 +4354,7 @@ claw_add_files(struct device *dev) static void claw_remove_files(struct device *dev) { - pr_debug("%s() called\n", __FUNCTION__); + pr_debug("%s() called\n", __func__); CLAW_DBF_TEXT(2,setup,"rem_file"); sysfs_remove_group(&dev->kobj, &claw_attr_group); } @@ -4385,12 +4385,12 @@ claw_init(void) printk(KERN_INFO "claw: starting driver\n"); #ifdef FUNCTRACE - printk(KERN_INFO "claw: %s() enter \n",__FUNCTION__); + printk(KERN_INFO "claw: %s() enter \n",__func__); #endif ret = claw_register_debug_facility(); if (ret) { printk(KERN_WARNING "claw: %s() debug_register failed %d\n", - __FUNCTION__,ret); + __func__,ret); return ret; } CLAW_DBF_TEXT(2,setup,"init_mod"); @@ -4398,10 +4398,10 @@ claw_init(void) if (ret) { claw_unregister_debug_facility(); printk(KERN_WARNING "claw; %s() cu3088 register failed %d\n", - __FUNCTION__,ret); + __func__,ret); } #ifdef FUNCTRACE - printk(KERN_INFO "claw: %s() exit \n",__FUNCTION__); + printk(KERN_INFO "claw: %s() exit \n",__func__); #endif return ret; } diff --git a/drivers/s390/net/netiucv.c b/drivers/s390/net/netiucv.c index 874a19994489..8f876f6ab367 100644 --- a/drivers/s390/net/netiucv.c +++ b/drivers/s390/net/netiucv.c @@ -670,7 +670,7 @@ static void conn_action_rx(fsm_instance *fi, int event, void *arg) struct netiucv_priv *privptr = netdev_priv(conn->netdev); int rc; - IUCV_DBF_TEXT(trace, 4, __FUNCTION__); + IUCV_DBF_TEXT(trace, 4, __func__); if (!conn->netdev) { iucv_message_reject(conn->path, msg); @@ -718,7 +718,7 @@ static void conn_action_txdone(fsm_instance *fi, int event, void *arg) struct ll_header header; int rc; - IUCV_DBF_TEXT(trace, 4, __FUNCTION__); + IUCV_DBF_TEXT(trace, 4, __func__); if (conn && conn->netdev) privptr = netdev_priv(conn->netdev); @@ -799,7 +799,7 @@ static void conn_action_connaccept(fsm_instance *fi, int event, void *arg) struct netiucv_priv *privptr = netdev_priv(netdev); int rc; - IUCV_DBF_TEXT(trace, 3, __FUNCTION__); + IUCV_DBF_TEXT(trace, 3, __func__); conn->path = path; path->msglim = NETIUCV_QUEUELEN_DEFAULT; @@ -821,7 +821,7 @@ static void conn_action_connreject(fsm_instance *fi, int event, void *arg) struct iucv_event *ev = arg; struct iucv_path *path = 
ev->data; - IUCV_DBF_TEXT(trace, 3, __FUNCTION__); + IUCV_DBF_TEXT(trace, 3, __func__); iucv_path_sever(path, NULL); } @@ -831,7 +831,7 @@ static void conn_action_connack(fsm_instance *fi, int event, void *arg) struct net_device *netdev = conn->netdev; struct netiucv_priv *privptr = netdev_priv(netdev); - IUCV_DBF_TEXT(trace, 3, __FUNCTION__); + IUCV_DBF_TEXT(trace, 3, __func__); fsm_deltimer(&conn->timer); fsm_newstate(fi, CONN_STATE_IDLE); netdev->tx_queue_len = conn->path->msglim; @@ -842,7 +842,7 @@ static void conn_action_conntimsev(fsm_instance *fi, int event, void *arg) { struct iucv_connection *conn = arg; - IUCV_DBF_TEXT(trace, 3, __FUNCTION__); + IUCV_DBF_TEXT(trace, 3, __func__); fsm_deltimer(&conn->timer); iucv_path_sever(conn->path, NULL); fsm_newstate(fi, CONN_STATE_STARTWAIT); @@ -854,7 +854,7 @@ static void conn_action_connsever(fsm_instance *fi, int event, void *arg) struct net_device *netdev = conn->netdev; struct netiucv_priv *privptr = netdev_priv(netdev); - IUCV_DBF_TEXT(trace, 3, __FUNCTION__); + IUCV_DBF_TEXT(trace, 3, __func__); fsm_deltimer(&conn->timer); iucv_path_sever(conn->path, NULL); @@ -870,7 +870,7 @@ static void conn_action_start(fsm_instance *fi, int event, void *arg) struct iucv_connection *conn = arg; int rc; - IUCV_DBF_TEXT(trace, 3, __FUNCTION__); + IUCV_DBF_TEXT(trace, 3, __func__); fsm_newstate(fi, CONN_STATE_STARTWAIT); PRINT_DEBUG("%s('%s'): connecting ...\n", @@ -948,7 +948,7 @@ static void conn_action_stop(fsm_instance *fi, int event, void *arg) struct net_device *netdev = conn->netdev; struct netiucv_priv *privptr = netdev_priv(netdev); - IUCV_DBF_TEXT(trace, 3, __FUNCTION__); + IUCV_DBF_TEXT(trace, 3, __func__); fsm_deltimer(&conn->timer); fsm_newstate(fi, CONN_STATE_STOPPED); @@ -1024,7 +1024,7 @@ static void dev_action_start(fsm_instance *fi, int event, void *arg) struct net_device *dev = arg; struct netiucv_priv *privptr = netdev_priv(dev); - IUCV_DBF_TEXT(trace, 3, __FUNCTION__); + IUCV_DBF_TEXT(trace, 3, __func__); fsm_newstate(fi, DEV_STATE_STARTWAIT); fsm_event(privptr->conn->fsm, CONN_EVENT_START, privptr->conn); @@ -1044,7 +1044,7 @@ dev_action_stop(fsm_instance *fi, int event, void *arg) struct netiucv_priv *privptr = netdev_priv(dev); struct iucv_event ev; - IUCV_DBF_TEXT(trace, 3, __FUNCTION__); + IUCV_DBF_TEXT(trace, 3, __func__); ev.conn = privptr->conn; @@ -1066,7 +1066,7 @@ dev_action_connup(fsm_instance *fi, int event, void *arg) struct net_device *dev = arg; struct netiucv_priv *privptr = netdev_priv(dev); - IUCV_DBF_TEXT(trace, 3, __FUNCTION__); + IUCV_DBF_TEXT(trace, 3, __func__); switch (fsm_getstate(fi)) { case DEV_STATE_STARTWAIT: @@ -1097,7 +1097,7 @@ dev_action_connup(fsm_instance *fi, int event, void *arg) static void dev_action_conndown(fsm_instance *fi, int event, void *arg) { - IUCV_DBF_TEXT(trace, 3, __FUNCTION__); + IUCV_DBF_TEXT(trace, 3, __func__); switch (fsm_getstate(fi)) { case DEV_STATE_RUNNING: @@ -1288,7 +1288,7 @@ static int netiucv_tx(struct sk_buff *skb, struct net_device *dev) struct netiucv_priv *privptr = netdev_priv(dev); int rc; - IUCV_DBF_TEXT(trace, 4, __FUNCTION__); + IUCV_DBF_TEXT(trace, 4, __func__); /** * Some sanity checks ... 
*/ @@ -1344,7 +1344,7 @@ static struct net_device_stats *netiucv_stats (struct net_device * dev) { struct netiucv_priv *priv = netdev_priv(dev); - IUCV_DBF_TEXT(trace, 5, __FUNCTION__); + IUCV_DBF_TEXT(trace, 5, __func__); return &priv->stats; } @@ -1360,7 +1360,7 @@ static struct net_device_stats *netiucv_stats (struct net_device * dev) */ static int netiucv_change_mtu(struct net_device * dev, int new_mtu) { - IUCV_DBF_TEXT(trace, 3, __FUNCTION__); + IUCV_DBF_TEXT(trace, 3, __func__); if (new_mtu < 576 || new_mtu > NETIUCV_MTU_MAX) { IUCV_DBF_TEXT(setup, 2, "given MTU out of valid range\n"); return -EINVAL; @@ -1378,7 +1378,7 @@ static ssize_t user_show(struct device *dev, struct device_attribute *attr, { struct netiucv_priv *priv = dev->driver_data; - IUCV_DBF_TEXT(trace, 5, __FUNCTION__); + IUCV_DBF_TEXT(trace, 5, __func__); return sprintf(buf, "%s\n", netiucv_printname(priv->conn->userid)); } @@ -1393,7 +1393,7 @@ static ssize_t user_write(struct device *dev, struct device_attribute *attr, int i; struct iucv_connection *cp; - IUCV_DBF_TEXT(trace, 3, __FUNCTION__); + IUCV_DBF_TEXT(trace, 3, __func__); if (count > 9) { PRINT_WARN("netiucv: username too long (%d)!\n", (int) count); IUCV_DBF_TEXT_(setup, 2, @@ -1449,7 +1449,7 @@ static ssize_t buffer_show (struct device *dev, struct device_attribute *attr, char *buf) { struct netiucv_priv *priv = dev->driver_data; - IUCV_DBF_TEXT(trace, 5, __FUNCTION__); + IUCV_DBF_TEXT(trace, 5, __func__); return sprintf(buf, "%d\n", priv->conn->max_buffsize); } @@ -1461,7 +1461,7 @@ static ssize_t buffer_write (struct device *dev, struct device_attribute *attr, char *e; int bs1; - IUCV_DBF_TEXT(trace, 3, __FUNCTION__); + IUCV_DBF_TEXT(trace, 3, __func__); if (count >= 39) return -EINVAL; @@ -1513,7 +1513,7 @@ static ssize_t dev_fsm_show (struct device *dev, struct device_attribute *attr, { struct netiucv_priv *priv = dev->driver_data; - IUCV_DBF_TEXT(trace, 5, __FUNCTION__); + IUCV_DBF_TEXT(trace, 5, __func__); return sprintf(buf, "%s\n", fsm_getstate_str(priv->fsm)); } @@ -1524,7 +1524,7 @@ static ssize_t conn_fsm_show (struct device *dev, { struct netiucv_priv *priv = dev->driver_data; - IUCV_DBF_TEXT(trace, 5, __FUNCTION__); + IUCV_DBF_TEXT(trace, 5, __func__); return sprintf(buf, "%s\n", fsm_getstate_str(priv->conn->fsm)); } @@ -1535,7 +1535,7 @@ static ssize_t maxmulti_show (struct device *dev, { struct netiucv_priv *priv = dev->driver_data; - IUCV_DBF_TEXT(trace, 5, __FUNCTION__); + IUCV_DBF_TEXT(trace, 5, __func__); return sprintf(buf, "%ld\n", priv->conn->prof.maxmulti); } @@ -1545,7 +1545,7 @@ static ssize_t maxmulti_write (struct device *dev, { struct netiucv_priv *priv = dev->driver_data; - IUCV_DBF_TEXT(trace, 4, __FUNCTION__); + IUCV_DBF_TEXT(trace, 4, __func__); priv->conn->prof.maxmulti = 0; return count; } @@ -1557,7 +1557,7 @@ static ssize_t maxcq_show (struct device *dev, struct device_attribute *attr, { struct netiucv_priv *priv = dev->driver_data; - IUCV_DBF_TEXT(trace, 5, __FUNCTION__); + IUCV_DBF_TEXT(trace, 5, __func__); return sprintf(buf, "%ld\n", priv->conn->prof.maxcqueue); } @@ -1566,7 +1566,7 @@ static ssize_t maxcq_write (struct device *dev, struct device_attribute *attr, { struct netiucv_priv *priv = dev->driver_data; - IUCV_DBF_TEXT(trace, 4, __FUNCTION__); + IUCV_DBF_TEXT(trace, 4, __func__); priv->conn->prof.maxcqueue = 0; return count; } @@ -1578,7 +1578,7 @@ static ssize_t sdoio_show (struct device *dev, struct device_attribute *attr, { struct netiucv_priv *priv = dev->driver_data; - IUCV_DBF_TEXT(trace, 5, 
__FUNCTION__); + IUCV_DBF_TEXT(trace, 5, __func__); return sprintf(buf, "%ld\n", priv->conn->prof.doios_single); } @@ -1587,7 +1587,7 @@ static ssize_t sdoio_write (struct device *dev, struct device_attribute *attr, { struct netiucv_priv *priv = dev->driver_data; - IUCV_DBF_TEXT(trace, 4, __FUNCTION__); + IUCV_DBF_TEXT(trace, 4, __func__); priv->conn->prof.doios_single = 0; return count; } @@ -1599,7 +1599,7 @@ static ssize_t mdoio_show (struct device *dev, struct device_attribute *attr, { struct netiucv_priv *priv = dev->driver_data; - IUCV_DBF_TEXT(trace, 5, __FUNCTION__); + IUCV_DBF_TEXT(trace, 5, __func__); return sprintf(buf, "%ld\n", priv->conn->prof.doios_multi); } @@ -1608,7 +1608,7 @@ static ssize_t mdoio_write (struct device *dev, struct device_attribute *attr, { struct netiucv_priv *priv = dev->driver_data; - IUCV_DBF_TEXT(trace, 5, __FUNCTION__); + IUCV_DBF_TEXT(trace, 5, __func__); priv->conn->prof.doios_multi = 0; return count; } @@ -1620,7 +1620,7 @@ static ssize_t txlen_show (struct device *dev, struct device_attribute *attr, { struct netiucv_priv *priv = dev->driver_data; - IUCV_DBF_TEXT(trace, 5, __FUNCTION__); + IUCV_DBF_TEXT(trace, 5, __func__); return sprintf(buf, "%ld\n", priv->conn->prof.txlen); } @@ -1629,7 +1629,7 @@ static ssize_t txlen_write (struct device *dev, struct device_attribute *attr, { struct netiucv_priv *priv = dev->driver_data; - IUCV_DBF_TEXT(trace, 4, __FUNCTION__); + IUCV_DBF_TEXT(trace, 4, __func__); priv->conn->prof.txlen = 0; return count; } @@ -1641,7 +1641,7 @@ static ssize_t txtime_show (struct device *dev, struct device_attribute *attr, { struct netiucv_priv *priv = dev->driver_data; - IUCV_DBF_TEXT(trace, 5, __FUNCTION__); + IUCV_DBF_TEXT(trace, 5, __func__); return sprintf(buf, "%ld\n", priv->conn->prof.tx_time); } @@ -1650,7 +1650,7 @@ static ssize_t txtime_write (struct device *dev, struct device_attribute *attr, { struct netiucv_priv *priv = dev->driver_data; - IUCV_DBF_TEXT(trace, 4, __FUNCTION__); + IUCV_DBF_TEXT(trace, 4, __func__); priv->conn->prof.tx_time = 0; return count; } @@ -1662,7 +1662,7 @@ static ssize_t txpend_show (struct device *dev, struct device_attribute *attr, { struct netiucv_priv *priv = dev->driver_data; - IUCV_DBF_TEXT(trace, 5, __FUNCTION__); + IUCV_DBF_TEXT(trace, 5, __func__); return sprintf(buf, "%ld\n", priv->conn->prof.tx_pending); } @@ -1671,7 +1671,7 @@ static ssize_t txpend_write (struct device *dev, struct device_attribute *attr, { struct netiucv_priv *priv = dev->driver_data; - IUCV_DBF_TEXT(trace, 4, __FUNCTION__); + IUCV_DBF_TEXT(trace, 4, __func__); priv->conn->prof.tx_pending = 0; return count; } @@ -1683,7 +1683,7 @@ static ssize_t txmpnd_show (struct device *dev, struct device_attribute *attr, { struct netiucv_priv *priv = dev->driver_data; - IUCV_DBF_TEXT(trace, 5, __FUNCTION__); + IUCV_DBF_TEXT(trace, 5, __func__); return sprintf(buf, "%ld\n", priv->conn->prof.tx_max_pending); } @@ -1692,7 +1692,7 @@ static ssize_t txmpnd_write (struct device *dev, struct device_attribute *attr, { struct netiucv_priv *priv = dev->driver_data; - IUCV_DBF_TEXT(trace, 4, __FUNCTION__); + IUCV_DBF_TEXT(trace, 4, __func__); priv->conn->prof.tx_max_pending = 0; return count; } @@ -1732,7 +1732,7 @@ static int netiucv_add_files(struct device *dev) { int ret; - IUCV_DBF_TEXT(trace, 3, __FUNCTION__); + IUCV_DBF_TEXT(trace, 3, __func__); ret = sysfs_create_group(&dev->kobj, &netiucv_attr_group); if (ret) return ret; @@ -1744,7 +1744,7 @@ static int netiucv_add_files(struct device *dev) static void 
netiucv_remove_files(struct device *dev) { - IUCV_DBF_TEXT(trace, 3, __FUNCTION__); + IUCV_DBF_TEXT(trace, 3, __func__); sysfs_remove_group(&dev->kobj, &netiucv_stat_attr_group); sysfs_remove_group(&dev->kobj, &netiucv_attr_group); } @@ -1756,7 +1756,7 @@ static int netiucv_register_device(struct net_device *ndev) int ret; - IUCV_DBF_TEXT(trace, 3, __FUNCTION__); + IUCV_DBF_TEXT(trace, 3, __func__); if (dev) { snprintf(dev->bus_id, BUS_ID_SIZE, "net%s", ndev->name); @@ -1792,7 +1792,7 @@ out_unreg: static void netiucv_unregister_device(struct device *dev) { - IUCV_DBF_TEXT(trace, 3, __FUNCTION__); + IUCV_DBF_TEXT(trace, 3, __func__); netiucv_remove_files(dev); device_unregister(dev); } @@ -1857,7 +1857,7 @@ out: */ static void netiucv_remove_connection(struct iucv_connection *conn) { - IUCV_DBF_TEXT(trace, 3, __FUNCTION__); + IUCV_DBF_TEXT(trace, 3, __func__); write_lock_bh(&iucv_connection_rwlock); list_del_init(&conn->list); write_unlock_bh(&iucv_connection_rwlock); @@ -1881,7 +1881,7 @@ static void netiucv_free_netdevice(struct net_device *dev) { struct netiucv_priv *privptr = netdev_priv(dev); - IUCV_DBF_TEXT(trace, 3, __FUNCTION__); + IUCV_DBF_TEXT(trace, 3, __func__); if (!dev) return; @@ -1963,7 +1963,7 @@ static ssize_t conn_write(struct device_driver *drv, struct netiucv_priv *priv; struct iucv_connection *cp; - IUCV_DBF_TEXT(trace, 3, __FUNCTION__); + IUCV_DBF_TEXT(trace, 3, __func__); if (count>9) { PRINT_WARN("netiucv: username too long (%d)!\n", (int)count); IUCV_DBF_TEXT(setup, 2, "conn_write: too long\n"); @@ -2048,7 +2048,7 @@ static ssize_t remove_write (struct device_driver *drv, const char *p; int i; - IUCV_DBF_TEXT(trace, 3, __FUNCTION__); + IUCV_DBF_TEXT(trace, 3, __func__); if (count >= IFNAMSIZ) count = IFNAMSIZ - 1;; @@ -2116,7 +2116,7 @@ static void __exit netiucv_exit(void) struct netiucv_priv *priv; struct device *dev; - IUCV_DBF_TEXT(trace, 3, __FUNCTION__); + IUCV_DBF_TEXT(trace, 3, __func__); while (!list_empty(&iucv_connection_list)) { cp = list_entry(iucv_connection_list.next, struct iucv_connection, list); @@ -2146,8 +2146,7 @@ static int __init netiucv_init(void) rc = iucv_register(&netiucv_handler, 1); if (rc) goto out_dbf; - IUCV_DBF_TEXT(trace, 3, __FUNCTION__); - netiucv_driver.groups = netiucv_drv_attr_groups; + IUCV_DBF_TEXT(trace, 3, __func__); rc = driver_register(&netiucv_driver); if (rc) { PRINT_ERR("NETIUCV: failed to register driver.\n"); diff --git a/drivers/s390/s390mach.c b/drivers/s390/s390mach.c index 644a06eba828..4d4b54277c43 100644 --- a/drivers/s390/s390mach.c +++ b/drivers/s390/s390mach.c @@ -59,15 +59,15 @@ repeat: printk(KERN_WARNING"%s: Code does not support more " "than two chained crws; please report to " - "linux390@de.ibm.com!\n", __FUNCTION__); + "linux390@de.ibm.com!\n", __func__); ccode = stcrw(&tmp_crw); printk(KERN_WARNING"%s: crw reports slct=%d, oflw=%d, " "chn=%d, rsc=%X, anc=%d, erc=%X, rsid=%X\n", - __FUNCTION__, tmp_crw.slct, tmp_crw.oflw, + __func__, tmp_crw.slct, tmp_crw.oflw, tmp_crw.chn, tmp_crw.rsc, tmp_crw.anc, tmp_crw.erc, tmp_crw.rsid); printk(KERN_WARNING"%s: This was crw number %x in the " - "chain\n", __FUNCTION__, chain); + "chain\n", __func__, chain); if (ccode != 0) break; chain = tmp_crw.chn ? chain + 1 : 0; @@ -83,7 +83,7 @@ repeat: crw[chain].rsid); /* Check for overflows. 
*/ if (crw[chain].oflw) { - pr_debug("%s: crw overflow detected!\n", __FUNCTION__); + pr_debug("%s: crw overflow detected!\n", __func__); css_schedule_eval_all(); chain = 0; continue; diff --git a/drivers/s390/scsi/zfcp_def.h b/drivers/s390/scsi/zfcp_def.h index 9e9f6c1e4e5d..45a7cd98c140 100644 --- a/drivers/s390/scsi/zfcp_def.h +++ b/drivers/s390/scsi/zfcp_def.h @@ -539,7 +539,7 @@ struct zfcp_rc_entry { /* logging routine for zfcp */ #define _ZFCP_LOG(fmt, args...) \ - printk(KERN_ERR ZFCP_NAME": %s(%d): " fmt, __FUNCTION__, \ + printk(KERN_ERR ZFCP_NAME": %s(%d): " fmt, __func__, \ __LINE__ , ##args) #define ZFCP_LOG(level, fmt, args...) \ -- cgit v1.2.3-59-g8ed1b From e1776856286bef076f400ec062b150b6f3c353cd Mon Sep 17 00:00:00 2001 From: Ursula Braun Date: Thu, 17 Apr 2008 07:46:22 +0200 Subject: [S390] qdio (new feature): enhancing info-retrieval from QDIO-adapters Next generation of OSA adapters allows retrieval of further self-describing infos. This is the preparational infrastructure patch for further exploitation in the qeth driver. Signed-off-by: Ursula Braun Signed-off-by: Martin Schwidefsky Signed-off-by: Heiko Carstens --- drivers/s390/cio/qdio.c | 178 +++++++++++++++++++++++------------------------- drivers/s390/cio/qdio.h | 28 ++++++++ 2 files changed, 114 insertions(+), 92 deletions(-) (limited to 'drivers/s390') diff --git a/drivers/s390/cio/qdio.c b/drivers/s390/cio/qdio.c index b1d18aa471c9..c359386708e9 100644 --- a/drivers/s390/cio/qdio.c +++ b/drivers/s390/cio/qdio.c @@ -2217,9 +2217,78 @@ qdio_synchronize(struct ccw_device *cdev, unsigned int flags, return cc; } +static int +qdio_get_ssqd_information(struct subchannel_id *schid, + struct qdio_chsc_ssqd **ssqd_area) +{ + int result; + + QDIO_DBF_TEXT0(0, setup, "getssqd"); + *ssqd_area = mempool_alloc(qdio_mempool_scssc, GFP_ATOMIC); + if (!ssqd_area) { + QDIO_PRINT_WARN("Could not get memory for chsc on sch x%x.\n", + schid->sch_no); + return -ENOMEM; + } + + (*ssqd_area)->request = (struct chsc_header) { + .length = 0x0010, + .code = 0x0024, + }; + (*ssqd_area)->first_sch = schid->sch_no; + (*ssqd_area)->last_sch = schid->sch_no; + (*ssqd_area)->ssid = schid->ssid; + result = chsc(*ssqd_area); + + if (result) { + QDIO_PRINT_WARN("CHSC returned cc %i on sch 0.%x.%x.\n", + result, schid->ssid, schid->sch_no); + goto out; + } + + if ((*ssqd_area)->response.code != QDIO_CHSC_RESPONSE_CODE_OK) { + QDIO_PRINT_WARN("CHSC response is 0x%x on sch 0.%x.%x.\n", + (*ssqd_area)->response.code, + schid->ssid, schid->sch_no); + goto out; + } + if (!((*ssqd_area)->flags & CHSC_FLAG_QDIO_CAPABILITY) || + !((*ssqd_area)->flags & CHSC_FLAG_VALIDITY) || + ((*ssqd_area)->sch != schid->sch_no)) { + QDIO_PRINT_WARN("huh? problems checking out sch 0.%x.%x... 
" \ + "using all SIGAs.\n", + schid->ssid, schid->sch_no); + goto out; + } + return 0; +out: + return -EINVAL; +} + +int +qdio_get_ssqd_pct(struct ccw_device *cdev) +{ + struct qdio_chsc_ssqd *ssqd_area; + struct subchannel_id schid; + char dbf_text[15]; + int rc; + int pct = 0; + + QDIO_DBF_TEXT0(0, setup, "getpct"); + schid = ccw_device_get_subchannel_id(cdev); + rc = qdio_get_ssqd_information(&schid, &ssqd_area); + if (!rc) + pct = (int)ssqd_area->pct; + if (rc != -ENOMEM) + mempool_free(ssqd_area, qdio_mempool_scssc); + sprintf(dbf_text, "pct: %d", pct); + QDIO_DBF_TEXT2(0, setup, dbf_text); + return pct; +} +EXPORT_SYMBOL(qdio_get_ssqd_pct); + static void -qdio_check_subchannel_qebsm(struct qdio_irq *irq_ptr, unsigned char qdioac, - unsigned long token) +qdio_check_subchannel_qebsm(struct qdio_irq *irq_ptr, unsigned long token) { struct qdio_q *q; int i; @@ -2227,7 +2296,7 @@ qdio_check_subchannel_qebsm(struct qdio_irq *irq_ptr, unsigned char qdioac, char dbf_text[15]; /*check if QEBSM is disabled */ - if (!(irq_ptr->is_qebsm) || !(qdioac & 0x01)) { + if (!(irq_ptr->is_qebsm) || !(irq_ptr->qdioac & 0x01)) { irq_ptr->is_qebsm = 0; irq_ptr->sch_token = 0; irq_ptr->qib.rflags &= ~QIB_RFLAGS_ENABLE_QEBSM; @@ -2256,102 +2325,27 @@ qdio_check_subchannel_qebsm(struct qdio_irq *irq_ptr, unsigned char qdioac, } static void -qdio_get_ssqd_information(struct qdio_irq *irq_ptr) +qdio_get_ssqd_siga(struct qdio_irq *irq_ptr) { - int result; - unsigned char qdioac; - struct { - struct chsc_header request; - u16 reserved1:10; - u16 ssid:2; - u16 fmt:4; - u16 first_sch; - u16 reserved2; - u16 last_sch; - u32 reserved3; - struct chsc_header response; - u32 reserved4; - u8 flags; - u8 reserved5; - u16 sch; - u8 qfmt; - u8 parm; - u8 qdioac1; - u8 sch_class; - u8 reserved7; - u8 icnt; - u8 reserved8; - u8 ocnt; - u8 reserved9; - u8 mbccnt; - u16 qdioac2; - u64 sch_token; - } *ssqd_area; + int rc; + struct qdio_chsc_ssqd *ssqd_area; QDIO_DBF_TEXT0(0,setup,"getssqd"); - qdioac = 0; - ssqd_area = mempool_alloc(qdio_mempool_scssc, GFP_ATOMIC); - if (!ssqd_area) { - QDIO_PRINT_WARN("Could not get memory for chsc. Using all " \ - "SIGAs for sch x%x.\n", irq_ptr->schid.sch_no); + irq_ptr->qdioac = 0; + rc = qdio_get_ssqd_information(&irq_ptr->schid, &ssqd_area); + if (rc) { + QDIO_PRINT_WARN("using all SIGAs for sch x%x.n", + irq_ptr->schid.sch_no); irq_ptr->qdioac = CHSC_FLAG_SIGA_INPUT_NECESSARY | CHSC_FLAG_SIGA_OUTPUT_NECESSARY | CHSC_FLAG_SIGA_SYNC_NECESSARY; /* all flags set */ irq_ptr->is_qebsm = 0; - irq_ptr->sch_token = 0; - irq_ptr->qib.rflags &= ~QIB_RFLAGS_ENABLE_QEBSM; - return; - } - - ssqd_area->request = (struct chsc_header) { - .length = 0x0010, - .code = 0x0024, - }; - ssqd_area->first_sch = irq_ptr->schid.sch_no; - ssqd_area->last_sch = irq_ptr->schid.sch_no; - ssqd_area->ssid = irq_ptr->schid.ssid; - result = chsc(ssqd_area); - - if (result) { - QDIO_PRINT_WARN("CHSC returned cc %i. Using all " \ - "SIGAs for sch 0.%x.%x.\n", result, - irq_ptr->schid.ssid, irq_ptr->schid.sch_no); - qdioac = CHSC_FLAG_SIGA_INPUT_NECESSARY | - CHSC_FLAG_SIGA_OUTPUT_NECESSARY | - CHSC_FLAG_SIGA_SYNC_NECESSARY; /* all flags set */ - irq_ptr->is_qebsm = 0; - goto out; - } + } else + irq_ptr->qdioac = ssqd_area->qdioac1; - if (ssqd_area->response.code != QDIO_CHSC_RESPONSE_CODE_OK) { - QDIO_PRINT_WARN("response upon checking SIGA needs " \ - "is 0x%x. 
Using all SIGAs for sch 0.%x.%x.\n", - ssqd_area->response.code, - irq_ptr->schid.ssid, irq_ptr->schid.sch_no); - qdioac = CHSC_FLAG_SIGA_INPUT_NECESSARY | - CHSC_FLAG_SIGA_OUTPUT_NECESSARY | - CHSC_FLAG_SIGA_SYNC_NECESSARY; /* all flags set */ - irq_ptr->is_qebsm = 0; - goto out; - } - if (!(ssqd_area->flags & CHSC_FLAG_QDIO_CAPABILITY) || - !(ssqd_area->flags & CHSC_FLAG_VALIDITY) || - (ssqd_area->sch != irq_ptr->schid.sch_no)) { - QDIO_PRINT_WARN("huh? problems checking out sch 0.%x.%x... " \ - "using all SIGAs.\n", - irq_ptr->schid.ssid, irq_ptr->schid.sch_no); - qdioac = CHSC_FLAG_SIGA_INPUT_NECESSARY | - CHSC_FLAG_SIGA_OUTPUT_NECESSARY | - CHSC_FLAG_SIGA_SYNC_NECESSARY; /* worst case */ - irq_ptr->is_qebsm = 0; - goto out; - } - qdioac = ssqd_area->qdioac1; -out: - qdio_check_subchannel_qebsm(irq_ptr, qdioac, - ssqd_area->sch_token); - mempool_free(ssqd_area, qdio_mempool_scssc); - irq_ptr->qdioac = qdioac; + qdio_check_subchannel_qebsm(irq_ptr, ssqd_area->sch_token); + if (rc != -ENOMEM) + mempool_free(ssqd_area, qdio_mempool_scssc); } static unsigned int @@ -3227,7 +3221,7 @@ qdio_establish(struct qdio_initialize *init_data) return -EIO; } - qdio_get_ssqd_information(irq_ptr); + qdio_get_ssqd_siga(irq_ptr); /* if this gets set once, we're running under VM and can omit SVSes */ if (irq_ptr->qdioac&CHSC_FLAG_SIGA_SYNC_NECESSARY) omit_svs=1; diff --git a/drivers/s390/cio/qdio.h b/drivers/s390/cio/qdio.h index da8a272fd75b..c3df6b2c38b7 100644 --- a/drivers/s390/cio/qdio.h +++ b/drivers/s390/cio/qdio.h @@ -406,6 +406,34 @@ do_clear_global_summary(void) #define CHSC_FLAG_SIGA_SYNC_DONE_ON_THININTS 0x08 #define CHSC_FLAG_SIGA_SYNC_DONE_ON_OUTB_PCIS 0x04 +struct qdio_chsc_ssqd { + struct chsc_header request; + u16 reserved1:10; + u16 ssid:2; + u16 fmt:4; + u16 first_sch; + u16 reserved2; + u16 last_sch; + u32 reserved3; + struct chsc_header response; + u32 reserved4; + u8 flags; + u8 reserved5; + u16 sch; + u8 qfmt; + u8 parm; + u8 qdioac1; + u8 sch_class; + u8 pct; + u8 icnt; + u8 reserved7; + u8 ocnt; + u8 reserved8; + u8 mbccnt; + u16 qdioac2; + u64 sch_token; +}; + struct qdio_perf_stats { #ifdef CONFIG_64BIT atomic64_t tl_runs; -- cgit v1.2.3-59-g8ed1b From 43ca5c3a1cefdaa09231d64485b8f676118bf1e0 Mon Sep 17 00:00:00 2001 From: Heiko Carstens Date: Thu, 17 Apr 2008 07:46:23 +0200 Subject: [S390] Convert monitor calls to function calls. Remove the program check generating monitor calls and use function calls instead. Theres is no real advantage in using monitor calls, but they do make debugging harder, because of all the program checks it generates. Signed-off-by: Martin Schwidefsky Signed-off-by: Heiko Carstens --- arch/s390/kernel/process.c | 70 +++++++++++++++++++-------------------------- arch/s390/kernel/s390_ext.c | 4 +-- arch/s390/kernel/smp.c | 1 - arch/s390/kernel/traps.c | 2 -- drivers/s390/cio/cio.c | 3 +- include/asm-s390/cpu.h | 8 ++++++ 6 files changed, 42 insertions(+), 46 deletions(-) (limited to 'drivers/s390') diff --git a/arch/s390/kernel/process.c b/arch/s390/kernel/process.c index ce203154d8ce..eb768ce88672 100644 --- a/arch/s390/kernel/process.c +++ b/arch/s390/kernel/process.c @@ -76,6 +76,7 @@ unsigned long thread_saved_pc(struct task_struct *tsk) * Need to know about CPUs going idle? 
*/ static ATOMIC_NOTIFIER_HEAD(idle_chain); +DEFINE_PER_CPU(struct s390_idle_data, s390_idle); int register_idle_notifier(struct notifier_block *nb) { @@ -89,9 +90,33 @@ int unregister_idle_notifier(struct notifier_block *nb) } EXPORT_SYMBOL(unregister_idle_notifier); -void do_monitor_call(struct pt_regs *regs, long interruption_code) +static int s390_idle_enter(void) +{ + struct s390_idle_data *idle; + int nr_calls = 0; + void *hcpu; + int rc; + + hcpu = (void *)(long)smp_processor_id(); + rc = __atomic_notifier_call_chain(&idle_chain, S390_CPU_IDLE, hcpu, -1, + &nr_calls); + if (rc == NOTIFY_BAD) { + nr_calls--; + __atomic_notifier_call_chain(&idle_chain, S390_CPU_NOT_IDLE, + hcpu, nr_calls, NULL); + return rc; + } + idle = &__get_cpu_var(s390_idle); + spin_lock(&idle->lock); + idle->idle_count++; + idle->in_idle = 1; + idle->idle_enter = get_clock(); + spin_unlock(&idle->lock); + return NOTIFY_OK; +} + +void s390_idle_leave(void) { -#ifdef CONFIG_SMP struct s390_idle_data *idle; idle = &__get_cpu_var(s390_idle); @@ -99,10 +124,6 @@ void do_monitor_call(struct pt_regs *regs, long interruption_code) idle->idle_time += get_clock() - idle->idle_enter; idle->in_idle = 0; spin_unlock(&idle->lock); -#endif - /* disable monitor call class 0 */ - __ctl_clear_bit(8, 15); - atomic_notifier_call_chain(&idle_chain, S390_CPU_NOT_IDLE, (void *)(long) smp_processor_id()); } @@ -113,61 +134,30 @@ extern void s390_handle_mcck(void); */ static void default_idle(void) { - int cpu, rc; - int nr_calls = 0; - void *hcpu; -#ifdef CONFIG_SMP - struct s390_idle_data *idle; -#endif - /* CPU is going idle. */ - cpu = smp_processor_id(); - hcpu = (void *)(long)cpu; local_irq_disable(); if (need_resched()) { local_irq_enable(); return; } - - rc = __atomic_notifier_call_chain(&idle_chain, S390_CPU_IDLE, hcpu, -1, - &nr_calls); - if (rc == NOTIFY_BAD) { - nr_calls--; - __atomic_notifier_call_chain(&idle_chain, S390_CPU_NOT_IDLE, - hcpu, nr_calls, NULL); + if (s390_idle_enter() == NOTIFY_BAD) { local_irq_enable(); return; } - - /* enable monitor call class 0 */ - __ctl_set_bit(8, 15); - #ifdef CONFIG_HOTPLUG_CPU - if (cpu_is_offline(cpu)) { + if (cpu_is_offline(smp_processor_id())) { preempt_enable_no_resched(); cpu_die(); } #endif - local_mcck_disable(); if (test_thread_flag(TIF_MCCK_PENDING)) { local_mcck_enable(); - /* disable monitor call class 0 */ - __ctl_clear_bit(8, 15); - atomic_notifier_call_chain(&idle_chain, S390_CPU_NOT_IDLE, - hcpu); + s390_idle_leave(); local_irq_enable(); s390_handle_mcck(); return; } -#ifdef CONFIG_SMP - idle = &__get_cpu_var(s390_idle); - spin_lock(&idle->lock); - idle->idle_count++; - idle->in_idle = 1; - idle->idle_enter = get_clock(); - spin_unlock(&idle->lock); -#endif trace_hardirqs_on(); /* Wait for external, I/O or machine check interrupt. 
*/ __load_psw_mask(psw_kernel_bits | PSW_MASK_WAIT | diff --git a/arch/s390/kernel/s390_ext.c b/arch/s390/kernel/s390_ext.c index acf93dba7727..3a8772d3baea 100644 --- a/arch/s390/kernel/s390_ext.c +++ b/arch/s390/kernel/s390_ext.c @@ -13,7 +13,7 @@ #include #include #include - +#include #include #include #include @@ -119,7 +119,7 @@ void do_extint(struct pt_regs *regs, unsigned short code) old_regs = set_irq_regs(regs); irq_enter(); - asm volatile ("mc 0,0"); + s390_idle_check(); if (S390_lowcore.int_clock >= S390_lowcore.jiffy_timer) /** * Make sure that the i/o interrupt did not "overtake" diff --git a/arch/s390/kernel/smp.c b/arch/s390/kernel/smp.c index d1e8e8a3fb66..5a445b1b1217 100644 --- a/arch/s390/kernel/smp.c +++ b/arch/s390/kernel/smp.c @@ -73,7 +73,6 @@ static int smp_cpu_state[NR_CPUS]; static int cpu_management; static DEFINE_PER_CPU(struct cpu, cpu_devices); -DEFINE_PER_CPU(struct s390_idle_data, s390_idle); static void smp_ext_bitcall(int, ec_bit_sig); diff --git a/arch/s390/kernel/traps.c b/arch/s390/kernel/traps.c index 60f728aeaf12..9452a205629b 100644 --- a/arch/s390/kernel/traps.c +++ b/arch/s390/kernel/traps.c @@ -59,7 +59,6 @@ int sysctl_userprocess_debug = 0; extern pgm_check_handler_t do_protection_exception; extern pgm_check_handler_t do_dat_exception; -extern pgm_check_handler_t do_monitor_call; extern pgm_check_handler_t do_asce_exception; #define stack_pointer ({ void **sp; asm("la %0,0(15)" : "=&d" (sp)); sp; }) @@ -739,6 +738,5 @@ void __init trap_init(void) pgm_check_table[0x15] = &operand_exception; pgm_check_table[0x1C] = &space_switch_exception; pgm_check_table[0x1D] = &hfp_sqrt_exception; - pgm_check_table[0x40] = &do_monitor_call; pfault_irq_init(); } diff --git a/drivers/s390/cio/cio.c b/drivers/s390/cio/cio.c index 60590a12d529..6dbe9488d3f9 100644 --- a/drivers/s390/cio/cio.c +++ b/drivers/s390/cio/cio.c @@ -24,6 +24,7 @@ #include #include #include +#include #include "cio.h" #include "css.h" #include "chsc.h" @@ -649,7 +650,7 @@ do_IRQ (struct pt_regs *regs) old_regs = set_irq_regs(regs); irq_enter(); - asm volatile ("mc 0,0"); + s390_idle_check(); if (S390_lowcore.int_clock >= S390_lowcore.jiffy_timer) /** * Make sure that the i/o interrupt did not "overtake" diff --git a/include/asm-s390/cpu.h b/include/asm-s390/cpu.h index 352dde194f3c..e5a6a9ba3adf 100644 --- a/include/asm-s390/cpu.h +++ b/include/asm-s390/cpu.h @@ -22,4 +22,12 @@ struct s390_idle_data { DECLARE_PER_CPU(struct s390_idle_data, s390_idle); +void s390_idle_leave(void); + +static inline void s390_idle_check(void) +{ + if ((&__get_cpu_var(s390_idle))->in_idle) + s390_idle_leave(); +} + #endif /* _ASM_S390_CPU_H_ */ -- cgit v1.2.3-59-g8ed1b From 5a62b192196af9a798e2f2f4c6a1324e7edf2f4b Mon Sep 17 00:00:00 2001 From: Heiko Carstens Date: Thu, 17 Apr 2008 07:46:25 +0200 Subject: [S390] Convert s390 to GENERIC_CLOCKEVENTS. This way we get rid of s390's NO_IDLE_HZ and use the generic dynticks variant instead. In addition we get high resolution timers for free. 
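The mechanics are compact: each CPU registers a one-shot clock_event_device whose set_next_event() callback programs the TOD clock comparator, and the comparator interrupt hands the expired event back to the generic layer, which then drives the tick, NOHZ and hrtimers. The two central pieces, lifted from the hunks further down in this patch and shown here as plain code for readability:

    /* generic layer asks for the next event: program the clock comparator */
    static int s390_next_event(unsigned long delta,
                               struct clock_event_device *evt)
    {
            S390_lowcore.clock_comparator = get_clock() + delta;
            set_clock_comparator(S390_lowcore.clock_comparator);
            return 0;
    }

    /* comparator interrupt: disarm, then let the registered handler run */
    void clock_comparator_work(void)
    {
            struct clock_event_device *cd;

            S390_lowcore.clock_comparator = -1ULL;
            set_clock_comparator(S390_lowcore.clock_comparator);
            cd = &__get_cpu_var(comparators);
            cd->event_handler(cd);
            s390_do_profile();
    }

With these in place the hand-rolled tick bookkeeping (account_ticks(), the jiffy_timer lowcore field and the NO_IDLE_HZ machinery) becomes dead code, which is what the bulk of the diff removes.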
Signed-off-by: Martin Schwidefsky Signed-off-by: Heiko Carstens --- arch/s390/Kconfig | 24 +---- arch/s390/kernel/process.c | 4 +- arch/s390/kernel/s390_ext.c | 9 +- arch/s390/kernel/setup.c | 2 +- arch/s390/kernel/time.c | 256 +++++++++++++------------------------------- arch/s390/lib/delay.c | 14 +-- drivers/s390/cio/cio.c | 9 +- include/asm-s390/hardirq.h | 2 +- include/asm-s390/lowcore.h | 6 +- 9 files changed, 98 insertions(+), 228 deletions(-) (limited to 'drivers/s390') diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig index da6ea64cc34d..f6a68e178fc5 100644 --- a/arch/s390/Kconfig +++ b/arch/s390/Kconfig @@ -43,6 +43,9 @@ config GENERIC_HWEIGHT config GENERIC_TIME def_bool y +config GENERIC_CLOCKEVENTS + def_bool y + config GENERIC_BUG bool depends on BUG @@ -73,6 +76,8 @@ menu "Base setup" comment "Processor type and features" +source "kernel/time/Kconfig" + config 64BIT bool "64 bit kernel" help @@ -487,25 +492,6 @@ config APPLDATA_NET_SUM source kernel/Kconfig.hz -config NO_IDLE_HZ - bool "No HZ timer ticks in idle" - help - Switches the regular HZ timer off when the system is going idle. - This helps z/VM to detect that the Linux system is idle. VM can - then "swap-out" this guest which reduces memory usage. It also - reduces the overhead of idle systems. - - The HZ timer can be switched on/off via /proc/sys/kernel/hz_timer. - hz_timer=0 means HZ timer is disabled. hz_timer=1 means HZ - timer is active. - -config NO_IDLE_HZ_INIT - bool "HZ timer in idle off by default" - depends on NO_IDLE_HZ - help - The HZ timer is switched off in idle by default. That means the - HZ timer is already disabled at boot time. - config S390_HYPFS_FS bool "s390 hypervisor file system support" select SYS_HYPERVISOR diff --git a/arch/s390/kernel/process.c b/arch/s390/kernel/process.c index eb768ce88672..df033249f6b1 100644 --- a/arch/s390/kernel/process.c +++ b/arch/s390/kernel/process.c @@ -36,6 +36,7 @@ #include #include #include +#include #include #include #include @@ -167,9 +168,10 @@ static void default_idle(void) void cpu_idle(void) { for (;;) { + tick_nohz_stop_sched_tick(); while (!need_resched()) default_idle(); - + tick_nohz_restart_sched_tick(); preempt_enable_no_resched(); schedule(); preempt_disable(); diff --git a/arch/s390/kernel/s390_ext.c b/arch/s390/kernel/s390_ext.c index 3a8772d3baea..947d8c74403b 100644 --- a/arch/s390/kernel/s390_ext.c +++ b/arch/s390/kernel/s390_ext.c @@ -120,12 +120,9 @@ void do_extint(struct pt_regs *regs, unsigned short code) old_regs = set_irq_regs(regs); irq_enter(); s390_idle_check(); - if (S390_lowcore.int_clock >= S390_lowcore.jiffy_timer) - /** - * Make sure that the i/o interrupt did not "overtake" - * the last HZ timer interrupt. - */ - account_ticks(S390_lowcore.int_clock); + if (S390_lowcore.int_clock >= S390_lowcore.clock_comparator) + /* Serve timer interrupts first. 
*/ + clock_comparator_work(); kstat_cpu(smp_processor_id()).irqs[EXTERNAL_INTERRUPT]++; index = ext_hash(code); for (p = ext_int_hash[index]; p; p = p->next) { diff --git a/arch/s390/kernel/setup.c b/arch/s390/kernel/setup.c index 22040d087d8a..7141147e6b63 100644 --- a/arch/s390/kernel/setup.c +++ b/arch/s390/kernel/setup.c @@ -428,7 +428,7 @@ setup_lowcore(void) lc->io_new_psw.mask = psw_kernel_bits; lc->io_new_psw.addr = PSW_ADDR_AMODE | (unsigned long) io_int_handler; lc->ipl_device = S390_lowcore.ipl_device; - lc->jiffy_timer = -1LL; + lc->clock_comparator = -1ULL; lc->kernel_stack = ((unsigned long) &init_thread_union) + THREAD_SIZE; lc->async_stack = (unsigned long) __alloc_bootmem(ASYNC_SIZE, ASYNC_SIZE, 0) + ASYNC_SIZE; diff --git a/arch/s390/kernel/time.c b/arch/s390/kernel/time.c index 925f9dc0b0a0..17c4de9e1b6b 100644 --- a/arch/s390/kernel/time.c +++ b/arch/s390/kernel/time.c @@ -30,7 +30,7 @@ #include #include #include - +#include #include #include #include @@ -57,9 +57,9 @@ static ext_int_info_t ext_int_info_cc; static ext_int_info_t ext_int_etr_cc; -static u64 init_timer_cc; static u64 jiffies_timer_cc; -static u64 xtime_cc; + +static DEFINE_PER_CPU(struct clock_event_device, comparators); /* * Scheduler clock - returns current time in nanosec units. @@ -95,162 +95,40 @@ void tod_to_timeval(__u64 todval, struct timespec *xtime) #define s390_do_profile() do { ; } while(0) #endif /* CONFIG_PROFILING */ -/* - * Advance the per cpu tick counter up to the time given with the - * "time" argument. The per cpu update consists of accounting - * the virtual cpu time, calling update_process_times and calling - * the profiling hook. If xtime is before time it is advanced as well. - */ -void account_ticks(u64 time) +void clock_comparator_work(void) { - __u32 ticks; - __u64 tmp; - - /* Calculate how many ticks have passed. */ - if (time < S390_lowcore.jiffy_timer) - return; - tmp = time - S390_lowcore.jiffy_timer; - if (tmp >= 2*CLK_TICKS_PER_JIFFY) { /* more than two ticks ? */ - ticks = __div(tmp, CLK_TICKS_PER_JIFFY) + 1; - S390_lowcore.jiffy_timer += - CLK_TICKS_PER_JIFFY * (__u64) ticks; - } else if (tmp >= CLK_TICKS_PER_JIFFY) { - ticks = 2; - S390_lowcore.jiffy_timer += 2*CLK_TICKS_PER_JIFFY; - } else { - ticks = 1; - S390_lowcore.jiffy_timer += CLK_TICKS_PER_JIFFY; - } - -#ifdef CONFIG_SMP - /* - * Do not rely on the boot cpu to do the calls to do_timer. - * Spread it over all cpus instead. - */ - write_seqlock(&xtime_lock); - if (S390_lowcore.jiffy_timer > xtime_cc) { - __u32 xticks; - tmp = S390_lowcore.jiffy_timer - xtime_cc; - if (tmp >= 2*CLK_TICKS_PER_JIFFY) { - xticks = __div(tmp, CLK_TICKS_PER_JIFFY); - xtime_cc += (__u64) xticks * CLK_TICKS_PER_JIFFY; - } else { - xticks = 1; - xtime_cc += CLK_TICKS_PER_JIFFY; - } - do_timer(xticks); - } - write_sequnlock(&xtime_lock); -#else - do_timer(ticks); -#endif - - while (ticks--) - update_process_times(user_mode(get_irq_regs())); + struct clock_event_device *cd; + S390_lowcore.clock_comparator = -1ULL; + set_clock_comparator(S390_lowcore.clock_comparator); + cd = &__get_cpu_var(comparators); + cd->event_handler(cd); s390_do_profile(); } -#ifdef CONFIG_NO_IDLE_HZ - -#ifdef CONFIG_NO_IDLE_HZ_INIT -int sysctl_hz_timer = 0; -#else -int sysctl_hz_timer = 1; -#endif - -/* - * Stop the HZ tick on the current CPU. - * Only cpu_idle may call this function. 
- */ -static void stop_hz_timer(void) -{ - unsigned long flags; - unsigned long seq, next; - __u64 timer, todval; - int cpu = smp_processor_id(); - - if (sysctl_hz_timer != 0) - return; - - cpu_set(cpu, nohz_cpu_mask); - - /* - * Leave the clock comparator set up for the next timer - * tick if either rcu or a softirq is pending. - */ - if (rcu_needs_cpu(cpu) || local_softirq_pending()) { - cpu_clear(cpu, nohz_cpu_mask); - return; - } - - /* - * This cpu is going really idle. Set up the clock comparator - * for the next event. - */ - next = next_timer_interrupt(); - do { - seq = read_seqbegin_irqsave(&xtime_lock, flags); - timer = ((__u64) next) - ((__u64) jiffies) + jiffies_64; - } while (read_seqretry_irqrestore(&xtime_lock, seq, flags)); - todval = -1ULL; - /* Be careful about overflows. */ - if (timer < (-1ULL / CLK_TICKS_PER_JIFFY)) { - timer = jiffies_timer_cc + timer * CLK_TICKS_PER_JIFFY; - if (timer >= jiffies_timer_cc) - todval = timer; - } - set_clock_comparator(todval); -} - /* - * Start the HZ tick on the current CPU. - * Only cpu_idle may call this function. + * Fixup the clock comparator. */ -static void start_hz_timer(void) +static void fixup_clock_comparator(unsigned long long delta) { - if (!cpu_isset(smp_processor_id(), nohz_cpu_mask)) + /* If nobody is waiting there's nothing to fix. */ + if (S390_lowcore.clock_comparator == -1ULL) return; - account_ticks(get_clock()); - set_clock_comparator(S390_lowcore.jiffy_timer + CPU_DEVIATION); - cpu_clear(smp_processor_id(), nohz_cpu_mask); -} - -static int nohz_idle_notify(struct notifier_block *self, - unsigned long action, void *hcpu) -{ - switch (action) { - case S390_CPU_IDLE: - stop_hz_timer(); - break; - case S390_CPU_NOT_IDLE: - start_hz_timer(); - break; - } - return NOTIFY_OK; + S390_lowcore.clock_comparator += delta; + set_clock_comparator(S390_lowcore.clock_comparator); } -static struct notifier_block nohz_idle_nb = { - .notifier_call = nohz_idle_notify, -}; - -static void __init nohz_init(void) +static int s390_next_event(unsigned long delta, + struct clock_event_device *evt) { - if (register_idle_notifier(&nohz_idle_nb)) - panic("Couldn't register idle notifier"); + S390_lowcore.clock_comparator = get_clock() + delta; + set_clock_comparator(S390_lowcore.clock_comparator); + return 0; } -#endif - -/* - * Set up per cpu jiffy timer and set the clock comparator. - */ -static void setup_jiffy_timer(void) +static void s390_set_mode(enum clock_event_mode mode, + struct clock_event_device *evt) { - /* Set up clock comparator to next jiffy. */ - S390_lowcore.jiffy_timer = - jiffies_timer_cc + (jiffies_64 + 1) * CLK_TICKS_PER_JIFFY; - set_clock_comparator(S390_lowcore.jiffy_timer + CPU_DEVIATION); } /* @@ -259,7 +137,26 @@ static void setup_jiffy_timer(void) */ void init_cpu_timer(void) { - setup_jiffy_timer(); + struct clock_event_device *cd; + int cpu; + + S390_lowcore.clock_comparator = -1ULL; + set_clock_comparator(S390_lowcore.clock_comparator); + + cpu = smp_processor_id(); + cd = &per_cpu(comparators, cpu); + cd->name = "comparator"; + cd->features = CLOCK_EVT_FEAT_ONESHOT; + cd->mult = 16777; + cd->shift = 12; + cd->min_delta_ns = 1; + cd->max_delta_ns = LONG_MAX; + cd->rating = 400; + cd->cpumask = cpumask_of_cpu(cpu); + cd->set_next_event = s390_next_event; + cd->set_mode = s390_set_mode; + + clockevents_register_device(cd); /* Enable clock comparator timer interrupt. 
*/ __ctl_set_bit(0,11); @@ -270,8 +167,6 @@ void init_cpu_timer(void) static void clock_comparator_interrupt(__u16 code) { - /* set clock comparator for next tick */ - set_clock_comparator(S390_lowcore.jiffy_timer + CPU_DEVIATION); } static void etr_reset(void); @@ -316,8 +211,9 @@ static struct clocksource clocksource_tod = { */ void __init time_init(void) { + u64 init_timer_cc; + init_timer_cc = reset_tod_clock(); - xtime_cc = init_timer_cc + CLK_TICKS_PER_JIFFY; jiffies_timer_cc = init_timer_cc - jiffies_64 * CLK_TICKS_PER_JIFFY; /* set xtime */ @@ -342,10 +238,6 @@ void __init time_init(void) /* Enable TOD clock interrupts on the boot cpu. */ init_cpu_timer(); -#ifdef CONFIG_NO_IDLE_HZ - nohz_init(); -#endif - #ifdef CONFIG_VIRT_TIMER vtime_init(); #endif @@ -699,53 +591,49 @@ static int etr_aib_follows(struct etr_aib *a1, struct etr_aib *a2, int p) } /* - * The time is "clock". xtime is what we think the time is. + * The time is "clock". old is what we think the time is. * Adjust the value by a multiple of jiffies and add the delta to ntp. * "delay" is an approximation how long the synchronization took. If * the time correction is positive, then "delay" is subtracted from * the time difference and only the remaining part is passed to ntp. */ -static void etr_adjust_time(unsigned long long clock, unsigned long long delay) +static unsigned long long etr_adjust_time(unsigned long long old, + unsigned long long clock, + unsigned long long delay) { unsigned long long delta, ticks; struct timex adjust; - /* - * We don't have to take the xtime lock because the cpu - * executing etr_adjust_time is running disabled in - * tasklet context and all other cpus are looping in - * etr_sync_cpu_start. - */ - if (clock > xtime_cc) { + if (clock > old) { /* It is later than we thought. */ - delta = ticks = clock - xtime_cc; + delta = ticks = clock - old; delta = ticks = (delta < delay) ? 0 : delta - delay; delta -= do_div(ticks, CLK_TICKS_PER_JIFFY); - init_timer_cc = init_timer_cc + delta; - jiffies_timer_cc = jiffies_timer_cc + delta; - xtime_cc = xtime_cc + delta; adjust.offset = ticks * (1000000 / HZ); } else { /* It is earlier than we thought. */ - delta = ticks = xtime_cc - clock; + delta = ticks = old - clock; delta -= do_div(ticks, CLK_TICKS_PER_JIFFY); - init_timer_cc = init_timer_cc - delta; - jiffies_timer_cc = jiffies_timer_cc - delta; - xtime_cc = xtime_cc - delta; + delta = -delta; adjust.offset = -ticks * (1000000 / HZ); } + jiffies_timer_cc += delta; if (adjust.offset != 0) { printk(KERN_NOTICE "etr: time adjusted by %li micro-seconds\n", adjust.offset); adjust.modes = ADJ_OFFSET_SINGLESHOT; do_adjtimex(&adjust); } + return delta; } +static struct { + int in_sync; + unsigned long long fixup_cc; +} etr_sync; + static void etr_sync_cpu_start(void *dummy) { - int *in_sync = dummy; - etr_enable_sync_clock(); /* * This looks like a busy wait loop but it isn't. etr_sync_cpus @@ -753,7 +641,7 @@ static void etr_sync_cpu_start(void *dummy) * __udelay will stop the cpu on an enabled wait psw until the * TOD is running again. */ - while (*in_sync == 0) { + while (etr_sync.in_sync == 0) { __udelay(1); /* * A different cpu changes *in_sync. Therefore use @@ -761,14 +649,14 @@ static void etr_sync_cpu_start(void *dummy) */ barrier(); } - if (*in_sync != 1) + if (etr_sync.in_sync != 1) /* Didn't work. Clear per-cpu in sync bit again. */ etr_disable_sync_clock(NULL); /* * This round of TOD syncing is done. Set the clock comparator * to the next tick and let the processor continue. 
*/ - setup_jiffy_timer(); + fixup_clock_comparator(etr_sync.fixup_cc); } static void etr_sync_cpu_end(void *dummy) @@ -783,8 +671,8 @@ static void etr_sync_cpu_end(void *dummy) static int etr_sync_clock(struct etr_aib *aib, int port) { struct etr_aib *sync_port; - unsigned long long clock, delay; - int in_sync, follows; + unsigned long long clock, old_clock, delay, delta; + int follows; int rc; /* Check if the current aib is adjacent to the sync port aib. */ @@ -799,9 +687,9 @@ static int etr_sync_clock(struct etr_aib *aib, int port) * successfully synced the clock. smp_call_function will * return after all other cpus are in etr_sync_cpu_start. */ - in_sync = 0; + memset(&etr_sync, 0, sizeof(etr_sync)); preempt_disable(); - smp_call_function(etr_sync_cpu_start,&in_sync,0,0); + smp_call_function(etr_sync_cpu_start, NULL, 0, 0); local_irq_disable(); etr_enable_sync_clock(); @@ -809,6 +697,7 @@ static int etr_sync_clock(struct etr_aib *aib, int port) __ctl_set_bit(14, 21); __ctl_set_bit(0, 29); clock = ((unsigned long long) (aib->edf2.etv + 1)) << 32; + old_clock = get_clock(); if (set_clock(clock) == 0) { __udelay(1); /* Wait for the clock to start. */ __ctl_clear_bit(0, 29); @@ -817,16 +706,17 @@ static int etr_sync_clock(struct etr_aib *aib, int port) /* Adjust Linux timing variables. */ delay = (unsigned long long) (aib->edf2.etv - sync_port->edf2.etv) << 32; - etr_adjust_time(clock, delay); - setup_jiffy_timer(); + delta = etr_adjust_time(old_clock, clock, delay); + etr_sync.fixup_cc = delta; + fixup_clock_comparator(delta); /* Verify that the clock is properly set. */ if (!etr_aib_follows(sync_port, aib, port)) { /* Didn't work. */ etr_disable_sync_clock(NULL); - in_sync = -EAGAIN; + etr_sync.in_sync = -EAGAIN; rc = -EAGAIN; } else { - in_sync = 1; + etr_sync.in_sync = 1; rc = 0; } } else { @@ -834,7 +724,7 @@ static int etr_sync_clock(struct etr_aib *aib, int port) __ctl_clear_bit(0, 29); __ctl_clear_bit(14, 21); etr_disable_sync_clock(NULL); - in_sync = -EAGAIN; + etr_sync.in_sync = -EAGAIN; rc = -EAGAIN; } local_irq_enable(); diff --git a/arch/s390/lib/delay.c b/arch/s390/lib/delay.c index 70f2a862b670..eae21a8ac72d 100644 --- a/arch/s390/lib/delay.c +++ b/arch/s390/lib/delay.c @@ -34,7 +34,7 @@ void __delay(unsigned long loops) */ void __udelay(unsigned long usecs) { - u64 end, time, jiffy_timer = 0; + u64 end, time, old_cc = 0; unsigned long flags, cr0, mask, dummy; int irq_context; @@ -43,8 +43,8 @@ void __udelay(unsigned long usecs) local_bh_disable(); local_irq_save(flags); if (raw_irqs_disabled_flags(flags)) { - jiffy_timer = S390_lowcore.jiffy_timer; - S390_lowcore.jiffy_timer = -1ULL - (4096 << 12); + old_cc = S390_lowcore.clock_comparator; + S390_lowcore.clock_comparator = -1ULL; __ctl_store(cr0, 0, 0); dummy = (cr0 & 0xffff00e0) | 0x00000800; __ctl_load(dummy , 0, 0); @@ -55,8 +55,8 @@ void __udelay(unsigned long usecs) end = get_clock() + ((u64) usecs << 12); do { - time = end < S390_lowcore.jiffy_timer ? - end : S390_lowcore.jiffy_timer; + time = end < S390_lowcore.clock_comparator ? 
+ end : S390_lowcore.clock_comparator; set_clock_comparator(time); trace_hardirqs_on(); __load_psw_mask(mask); @@ -65,10 +65,10 @@ void __udelay(unsigned long usecs) if (raw_irqs_disabled_flags(flags)) { __ctl_load(cr0, 0, 0); - S390_lowcore.jiffy_timer = jiffy_timer; + S390_lowcore.clock_comparator = old_cc; } if (!irq_context) _local_bh_enable(); - set_clock_comparator(S390_lowcore.jiffy_timer); + set_clock_comparator(S390_lowcore.clock_comparator); local_irq_restore(flags); } diff --git a/drivers/s390/cio/cio.c b/drivers/s390/cio/cio.c index 6dbe9488d3f9..41db3cc653f5 100644 --- a/drivers/s390/cio/cio.c +++ b/drivers/s390/cio/cio.c @@ -651,12 +651,9 @@ do_IRQ (struct pt_regs *regs) old_regs = set_irq_regs(regs); irq_enter(); s390_idle_check(); - if (S390_lowcore.int_clock >= S390_lowcore.jiffy_timer) - /** - * Make sure that the i/o interrupt did not "overtake" - * the last HZ timer interrupt. - */ - account_ticks(S390_lowcore.int_clock); + if (S390_lowcore.int_clock >= S390_lowcore.clock_comparator) + /* Serve timer interrupts first. */ + clock_comparator_work(); /* * Get interrupt information from lowcore */ diff --git a/include/asm-s390/hardirq.h b/include/asm-s390/hardirq.h index 31beb18cb3d1..4b7cb964ff35 100644 --- a/include/asm-s390/hardirq.h +++ b/include/asm-s390/hardirq.h @@ -32,6 +32,6 @@ typedef struct { #define HARDIRQ_BITS 8 -extern void account_ticks(u64 time); +void clock_comparator_work(void); #endif /* __ASM_HARDIRQ_H */ diff --git a/include/asm-s390/lowcore.h b/include/asm-s390/lowcore.h index 801a6fd35b5b..6baa51b8683b 100644 --- a/include/asm-s390/lowcore.h +++ b/include/asm-s390/lowcore.h @@ -80,7 +80,6 @@ #define __LC_CPUID 0xC60 #define __LC_CPUADDR 0xC68 #define __LC_IPLDEV 0xC7C -#define __LC_JIFFY_TIMER 0xC80 #define __LC_CURRENT 0xC90 #define __LC_INT_CLOCK 0xC98 #else /* __s390x__ */ @@ -103,7 +102,6 @@ #define __LC_CPUID 0xD80 #define __LC_CPUADDR 0xD88 #define __LC_IPLDEV 0xDB8 -#define __LC_JIFFY_TIMER 0xDC0 #define __LC_CURRENT 0xDD8 #define __LC_INT_CLOCK 0xDE8 #endif /* __s390x__ */ @@ -276,7 +274,7 @@ struct _lowcore /* entry.S sensitive area end */ /* SMP info area: defined by DJB */ - __u64 jiffy_timer; /* 0xc80 */ + __u64 clock_comparator; /* 0xc80 */ __u32 ext_call_fast; /* 0xc88 */ __u32 percpu_offset; /* 0xc8c */ __u32 current_task; /* 0xc90 */ @@ -368,7 +366,7 @@ struct _lowcore /* entry.S sensitive area end */ /* SMP info area: defined by DJB */ - __u64 jiffy_timer; /* 0xdc0 */ + __u64 clock_comparator; /* 0xdc0 */ __u64 ext_call_fast; /* 0xdc8 */ __u64 percpu_offset; /* 0xdd0 */ __u64 current_task; /* 0xdd8 */ -- cgit v1.2.3-59-g8ed1b From a806170e29c5468b1d641a22518243bdf1b8d58b Mon Sep 17 00:00:00 2001 From: Heiko Carstens Date: Thu, 17 Apr 2008 07:46:26 +0200 Subject: [S390] Fix a lot of sparse warnings. Most noteable part of this commit is the new local header file entry.h which contains all the function declarations of functions that get only called from asm code or are arch internal. That way we can avoid extern declarations in C files. This is more or less the same that was done for sparc64. 
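The shape of the change is the same everywhere: prototypes that used to be repeated as extern declarations in individual .c files move into one arch-internal header, which both the defining file and every user include. A minimal sketch of the pattern, with illustrative names rather than the real s390 symbols:

    /* entry.h -- arch-internal prototypes, never installed as a public header */
    #ifndef _EXAMPLE_ENTRY_H
    #define _EXAMPLE_ENTRY_H

    struct pt_regs;

    void do_example_exception(struct pt_regs *regs, long error_code);

    #endif

    /* caller.c -- previously carried its own "extern void do_example_exception(...);" */
    #include "entry.h"

    /* definer.c -- including the header lets gcc and sparse verify that the
     * definition matches the declaration every caller sees */
    #include "entry.h"

    void do_example_exception(struct pt_regs *regs, long error_code)
    {
    }

Besides removing the duplicated externs, this silences sparse's "symbol was not declared. Should it be static?" warnings for functions that are only referenced from assembler or from other compilation units.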
Signed-off-by: Martin Schwidefsky Signed-off-by: Heiko Carstens --- arch/s390/kernel/compat_linux.h | 73 ++++++++++++++++++++++++++++++++++++++++ arch/s390/kernel/compat_signal.c | 1 + arch/s390/kernel/debug.c | 2 +- arch/s390/kernel/early.c | 1 + arch/s390/kernel/entry.h | 60 +++++++++++++++++++++++++++++++++ arch/s390/kernel/ipl.c | 2 +- arch/s390/kernel/kprobes.c | 2 +- arch/s390/kernel/process.c | 2 ++ arch/s390/kernel/ptrace.c | 1 + arch/s390/kernel/s390_ext.c | 1 + arch/s390/kernel/signal.c | 6 +--- arch/s390/kernel/smp.c | 3 +- arch/s390/kernel/sys_s390.c | 2 +- arch/s390/kernel/time.c | 1 + arch/s390/kernel/traps.c | 5 +-- arch/s390/mm/fault.c | 13 ++----- drivers/s390/block/dasd.c | 5 ++- drivers/s390/block/dasd_alias.c | 49 ++++++++++++--------------- drivers/s390/cio/cio.c | 18 ++++++---- drivers/s390/cio/cio.h | 1 + drivers/s390/cio/device.c | 1 - drivers/s390/cio/device.h | 1 + drivers/s390/s390mach.h | 4 +++ include/asm-s390/cio.h | 4 +++ include/asm-s390/timex.h | 1 + include/asm-s390/tlbflush.h | 3 +- 26 files changed, 198 insertions(+), 64 deletions(-) create mode 100644 arch/s390/kernel/entry.h (limited to 'drivers/s390') diff --git a/arch/s390/kernel/compat_linux.h b/arch/s390/kernel/compat_linux.h index e89f8c0c42a0..20723a062017 100644 --- a/arch/s390/kernel/compat_linux.h +++ b/arch/s390/kernel/compat_linux.h @@ -162,4 +162,77 @@ struct ucontext32 { compat_sigset_t uc_sigmask; /* mask last for extensibility */ }; +struct __sysctl_args32; +struct stat64_emu31; +struct mmap_arg_struct_emu31; +struct fadvise64_64_args; +struct old_sigaction32; +struct old_sigaction32; + +long sys32_chown16(const char __user * filename, u16 user, u16 group); +long sys32_lchown16(const char __user * filename, u16 user, u16 group); +long sys32_fchown16(unsigned int fd, u16 user, u16 group); +long sys32_setregid16(u16 rgid, u16 egid); +long sys32_setgid16(u16 gid); +long sys32_setreuid16(u16 ruid, u16 euid); +long sys32_setuid16(u16 uid); +long sys32_setresuid16(u16 ruid, u16 euid, u16 suid); +long sys32_getresuid16(u16 __user *ruid, u16 __user *euid, u16 __user *suid); +long sys32_setresgid16(u16 rgid, u16 egid, u16 sgid); +long sys32_getresgid16(u16 __user *rgid, u16 __user *egid, u16 __user *sgid); +long sys32_setfsuid16(u16 uid); +long sys32_setfsgid16(u16 gid); +long sys32_getgroups16(int gidsetsize, u16 __user *grouplist); +long sys32_setgroups16(int gidsetsize, u16 __user *grouplist); +long sys32_getuid16(void); +long sys32_geteuid16(void); +long sys32_getgid16(void); +long sys32_getegid16(void); +long sys32_ipc(u32 call, int first, int second, int third, u32 ptr); +long sys32_truncate64(const char __user * path, unsigned long high, + unsigned long low); +long sys32_ftruncate64(unsigned int fd, unsigned long high, unsigned long low); +long sys32_sched_rr_get_interval(compat_pid_t pid, + struct compat_timespec __user *interval); +long sys32_rt_sigprocmask(int how, compat_sigset_t __user *set, + compat_sigset_t __user *oset, size_t sigsetsize); +long sys32_rt_sigpending(compat_sigset_t __user *set, size_t sigsetsize); +long sys32_rt_sigqueueinfo(int pid, int sig, compat_siginfo_t __user *uinfo); +long sys32_execve(void); +long sys32_init_module(void __user *umod, unsigned long len, + const char __user *uargs); +long sys32_delete_module(const char __user *name_user, unsigned int flags); +long sys32_gettimeofday(struct compat_timeval __user *tv, + struct timezone __user *tz); +long sys32_settimeofday(struct compat_timeval __user *tv, + struct timezone __user *tz); +long sys32_pause(void); 
+long sys32_pread64(unsigned int fd, char __user *ubuf, size_t count, + u32 poshi, u32 poslo); +long sys32_pwrite64(unsigned int fd, const char __user *ubuf, + size_t count, u32 poshi, u32 poslo); +compat_ssize_t sys32_readahead(int fd, u32 offhi, u32 offlo, s32 count); +long sys32_sendfile(int out_fd, int in_fd, compat_off_t __user *offset, + size_t count); +long sys32_sendfile64(int out_fd, int in_fd, compat_loff_t __user *offset, + s32 count); +long sys32_sysctl(struct __sysctl_args32 __user *args); +long sys32_stat64(char __user * filename, struct stat64_emu31 __user * statbuf); +long sys32_lstat64(char __user * filename, + struct stat64_emu31 __user * statbuf); +long sys32_fstat64(unsigned long fd, struct stat64_emu31 __user * statbuf); +long sys32_fstatat64(unsigned int dfd, char __user *filename, + struct stat64_emu31 __user* statbuf, int flag); +unsigned long old32_mmap(struct mmap_arg_struct_emu31 __user *arg); +long sys32_mmap2(struct mmap_arg_struct_emu31 __user *arg); +long sys32_read(unsigned int fd, char __user * buf, size_t count); +long sys32_write(unsigned int fd, char __user * buf, size_t count); +long sys32_clone(void); +long sys32_fadvise64(int fd, loff_t offset, size_t len, int advise); +long sys32_fadvise64_64(struct fadvise64_64_args __user *args); +long sys32_sigaction(int sig, const struct old_sigaction32 __user *act, + struct old_sigaction32 __user *oact); +long sys32_rt_sigaction(int sig, const struct sigaction32 __user *act, + struct sigaction32 __user *oact, size_t sigsetsize); +long sys32_sigaltstack(const stack_t32 __user *uss, stack_t32 __user *uoss); #endif /* _ASM_S390X_S390_H */ diff --git a/arch/s390/kernel/compat_signal.c b/arch/s390/kernel/compat_signal.c index ae2f2d313930..c7f02e777af2 100644 --- a/arch/s390/kernel/compat_signal.c +++ b/arch/s390/kernel/compat_signal.c @@ -29,6 +29,7 @@ #include #include "compat_linux.h" #include "compat_ptrace.h" +#include "entry.h" #define _BLOCKABLE (~(sigmask(SIGKILL) | sigmask(SIGSTOP))) diff --git a/arch/s390/kernel/debug.c b/arch/s390/kernel/debug.c index 95a46bc008b7..1e7d4ac7068b 100644 --- a/arch/s390/kernel/debug.c +++ b/arch/s390/kernel/debug.c @@ -157,7 +157,7 @@ struct debug_view debug_sprintf_view = { }; /* used by dump analysis tools to determine version of debug feature */ -unsigned int debug_feature_version = __DEBUG_FEATURE_VERSION; +static unsigned int __used debug_feature_version = __DEBUG_FEATURE_VERSION; /* static globals */ diff --git a/arch/s390/kernel/early.c b/arch/s390/kernel/early.c index 01832c440636..540a67f979b6 100644 --- a/arch/s390/kernel/early.c +++ b/arch/s390/kernel/early.c @@ -21,6 +21,7 @@ #include #include #include +#include "entry.h" /* * Create a Kernel NSS if the SAVESYS= parameter is defined diff --git a/arch/s390/kernel/entry.h b/arch/s390/kernel/entry.h new file mode 100644 index 000000000000..6b1896345eda --- /dev/null +++ b/arch/s390/kernel/entry.h @@ -0,0 +1,60 @@ +#ifndef _ENTRY_H +#define _ENTRY_H + +#include +#include +#include + +typedef void pgm_check_handler_t(struct pt_regs *, long); +extern pgm_check_handler_t *pgm_check_table[128]; +pgm_check_handler_t do_protection_exception; +pgm_check_handler_t do_dat_exception; + +extern int sysctl_userprocess_debug; + +void do_single_step(struct pt_regs *regs); +void syscall_trace(struct pt_regs *regs, int entryexit); +void kernel_stack_overflow(struct pt_regs * regs); +void do_signal(struct pt_regs *regs); +int handle_signal32(unsigned long sig, struct k_sigaction *ka, + siginfo_t *info, sigset_t *oldset, struct pt_regs 
*regs); + +void do_extint(struct pt_regs *regs, unsigned short code); +int __cpuinit start_secondary(void *cpuvoid); +void __init startup_init(void); +void die(const char * str, struct pt_regs * regs, long err); + +struct new_utsname; +struct mmap_arg_struct; +struct fadvise64_64_args; +struct old_sigaction; +struct sel_arg_struct; + +long sys_pipe(unsigned long __user *fildes); +long sys_mmap2(struct mmap_arg_struct __user *arg); +long old_mmap(struct mmap_arg_struct __user *arg); +long sys_ipc(uint call, int first, unsigned long second, + unsigned long third, void __user *ptr); +long s390x_newuname(struct new_utsname __user *name); +long s390x_personality(unsigned long personality); +long s390_fadvise64(int fd, u32 offset_high, u32 offset_low, + size_t len, int advice); +long s390_fadvise64_64(struct fadvise64_64_args __user *args); +long s390_fallocate(int fd, int mode, loff_t offset, u32 len_high, u32 len_low); +long sys_fork(void); +long sys_clone(void); +long sys_vfork(void); +void execve_tail(void); +long sys_execve(void); +int sys_sigsuspend(int history0, int history1, old_sigset_t mask); +long sys_sigaction(int sig, const struct old_sigaction __user *act, + struct old_sigaction __user *oact); +long sys_sigaltstack(const stack_t __user *uss, stack_t __user *uoss); +long sys_sigreturn(void); +long sys_rt_sigreturn(void); +long sys32_sigreturn(void); +long sys32_rt_sigreturn(void); +long old_select(struct sel_arg_struct __user *arg); +long sys_ptrace(long request, long pid, long addr, long data); + +#endif /* _ENTRY_H */ diff --git a/arch/s390/kernel/ipl.c b/arch/s390/kernel/ipl.c index 375232c46c7a..532542447d66 100644 --- a/arch/s390/kernel/ipl.c +++ b/arch/s390/kernel/ipl.c @@ -655,7 +655,7 @@ static struct kobj_attribute reipl_type_attr = static struct kset *reipl_kset; -void reipl_run(struct shutdown_trigger *trigger) +static void reipl_run(struct shutdown_trigger *trigger) { struct ccw_dev_id devid; static char buf[100]; diff --git a/arch/s390/kernel/kprobes.c b/arch/s390/kernel/kprobes.c index c5549a206284..ed04d1372d5d 100644 --- a/arch/s390/kernel/kprobes.c +++ b/arch/s390/kernel/kprobes.c @@ -360,7 +360,7 @@ no_kprobe: * - When the probed function returns, this probe * causes the handlers to fire */ -void kretprobe_trampoline_holder(void) +static void __used kretprobe_trampoline_holder(void) { asm volatile(".global kretprobe_trampoline\n" "kretprobe_trampoline: bcr 0,0\n"); diff --git a/arch/s390/kernel/process.c b/arch/s390/kernel/process.c index df033249f6b1..dbefd0db395f 100644 --- a/arch/s390/kernel/process.c +++ b/arch/s390/kernel/process.c @@ -37,6 +37,7 @@ #include #include #include +#include #include #include #include @@ -45,6 +46,7 @@ #include #include #include +#include "entry.h" asmlinkage void ret_from_fork(void) asm ("ret_from_fork"); diff --git a/arch/s390/kernel/ptrace.c b/arch/s390/kernel/ptrace.c index 6e036bae9875..58a064296987 100644 --- a/arch/s390/kernel/ptrace.c +++ b/arch/s390/kernel/ptrace.c @@ -41,6 +41,7 @@ #include #include #include +#include "entry.h" #ifdef CONFIG_COMPAT #include "compat_ptrace.h" diff --git a/arch/s390/kernel/s390_ext.c b/arch/s390/kernel/s390_ext.c index 947d8c74403b..e019b419efc6 100644 --- a/arch/s390/kernel/s390_ext.c +++ b/arch/s390/kernel/s390_ext.c @@ -18,6 +18,7 @@ #include #include #include +#include "entry.h" /* * ext_int_hash[index] is the start of the list for all external interrupts diff --git a/arch/s390/kernel/signal.c b/arch/s390/kernel/signal.c index 8c92191949c2..b97682040215 100644 --- 
a/arch/s390/kernel/signal.c +++ b/arch/s390/kernel/signal.c @@ -27,6 +27,7 @@ #include #include #include +#include "entry.h" #define _BLOCKABLE (~(sigmask(SIGKILL) | sigmask(SIGSTOP))) @@ -484,11 +485,6 @@ void do_signal(struct pt_regs *regs) int ret; #ifdef CONFIG_COMPAT if (test_thread_flag(TIF_31BIT)) { - extern int handle_signal32(unsigned long sig, - struct k_sigaction *ka, - siginfo_t *info, - sigset_t *oldset, - struct pt_regs *regs); ret = handle_signal32(signr, &ka, &info, oldset, regs); } else diff --git a/arch/s390/kernel/smp.c b/arch/s390/kernel/smp.c index 5a445b1b1217..0dfa988c1b26 100644 --- a/arch/s390/kernel/smp.c +++ b/arch/s390/kernel/smp.c @@ -44,6 +44,7 @@ #include #include #include +#include "entry.h" /* * An array with a pointer the lowcore of every CPU. @@ -297,7 +298,7 @@ static void smp_ext_bitcall(int cpu, ec_bit_sig sig) /* * this function sends a 'purge tlb' signal to another CPU. */ -void smp_ptlb_callback(void *info) +static void smp_ptlb_callback(void *info) { __tlb_flush_local(); } diff --git a/arch/s390/kernel/sys_s390.c b/arch/s390/kernel/sys_s390.c index fefee99f28aa..988d0d64c2c8 100644 --- a/arch/s390/kernel/sys_s390.c +++ b/arch/s390/kernel/sys_s390.c @@ -29,8 +29,8 @@ #include #include #include - #include +#include "entry.h" /* * sys_pipe() is the normal C calling standard for creating diff --git a/arch/s390/kernel/time.c b/arch/s390/kernel/time.c index 17c4de9e1b6b..7aec676fefd5 100644 --- a/arch/s390/kernel/time.c +++ b/arch/s390/kernel/time.c @@ -39,6 +39,7 @@ #include #include #include +#include /* change this if you have some constant time drift */ #define USECS_PER_JIFFY ((unsigned long) 1000000/HZ) diff --git a/arch/s390/kernel/traps.c b/arch/s390/kernel/traps.c index 9452a205629b..b3524134f213 100644 --- a/arch/s390/kernel/traps.c +++ b/arch/s390/kernel/traps.c @@ -42,11 +42,8 @@ #include #include #include +#include "entry.h" -/* Called from entry.S only */ -extern void handle_per_exception(struct pt_regs *regs); - -typedef void pgm_check_handler_t(struct pt_regs *, long); pgm_check_handler_t *pgm_check_table[128]; #ifdef CONFIG_SYSCTL diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c index a9c635f01db8..2650f46001d0 100644 --- a/arch/s390/mm/fault.c +++ b/arch/s390/mm/fault.c @@ -28,11 +28,11 @@ #include #include #include - #include #include #include #include +#include "../kernel/entry.h" #ifndef CONFIG_64BIT #define __FAIL_ADDR_MASK 0x7ffff000 @@ -50,8 +50,6 @@ extern int sysctl_userprocess_debug; #endif -extern void die(const char *,struct pt_regs *,long); - #ifdef CONFIG_KPROBES static inline int notify_page_fault(struct pt_regs *regs, long err) { @@ -245,11 +243,6 @@ static void do_sigbus(struct pt_regs *regs, unsigned long error_code, } #ifdef CONFIG_S390_EXEC_PROTECT -extern long sys_sigreturn(void); -extern long sys_rt_sigreturn(void); -extern long sys32_sigreturn(void); -extern long sys32_rt_sigreturn(void); - static int signal_return(struct mm_struct *mm, struct pt_regs *regs, unsigned long address, unsigned long error_code) { @@ -424,7 +417,7 @@ no_context: } void __kprobes do_protection_exception(struct pt_regs *regs, - unsigned long error_code) + long error_code) { /* Protection exception is supressing, decrement psw address. 
*/ regs->psw.addr -= (error_code >> 16); @@ -440,7 +433,7 @@ void __kprobes do_protection_exception(struct pt_regs *regs, do_exception(regs, 4, 1); } -void __kprobes do_dat_exception(struct pt_regs *regs, unsigned long error_code) +void __kprobes do_dat_exception(struct pt_regs *regs, long error_code) { do_exception(regs, error_code & 0xff, 0); } diff --git a/drivers/s390/block/dasd.c b/drivers/s390/block/dasd.c index bb72e0a5b0e0..ac6d4d3218b3 100644 --- a/drivers/s390/block/dasd.c +++ b/drivers/s390/block/dasd.c @@ -2299,9 +2299,8 @@ int dasd_generic_set_offline(struct ccw_device *cdev) * in the other openers. */ if (device->block) { - struct dasd_block *block = device->block; - max_count = block->bdev ? 0 : -1; - open_count = (int) atomic_read(&block->open_count); + max_count = device->block->bdev ? 0 : -1; + open_count = atomic_read(&device->block->open_count); if (open_count > max_count) { if (open_count > 0) printk(KERN_WARNING "Can't offline dasd " diff --git a/drivers/s390/block/dasd_alias.c b/drivers/s390/block/dasd_alias.c index 3a40bee9d358..2d8df0b30538 100644 --- a/drivers/s390/block/dasd_alias.c +++ b/drivers/s390/block/dasd_alias.c @@ -745,6 +745,19 @@ static void flush_all_alias_devices_on_lcu(struct alias_lcu *lcu) spin_unlock_irqrestore(&lcu->lock, flags); } +static void __stop_device_on_lcu(struct dasd_device *device, + struct dasd_device *pos) +{ + /* If pos == device then device is already locked! */ + if (pos == device) { + pos->stopped |= DASD_STOPPED_SU; + return; + } + spin_lock(get_ccwdev_lock(pos->cdev)); + pos->stopped |= DASD_STOPPED_SU; + spin_unlock(get_ccwdev_lock(pos->cdev)); +} + /* * This function is called in interrupt context, so the * cdev lock for device is already locked! @@ -755,35 +768,15 @@ static void _stop_all_devices_on_lcu(struct alias_lcu *lcu, struct alias_pav_group *pavgroup; struct dasd_device *pos; - list_for_each_entry(pos, &lcu->active_devices, alias_list) { - if (pos != device) - spin_lock(get_ccwdev_lock(pos->cdev)); - pos->stopped |= DASD_STOPPED_SU; - if (pos != device) - spin_unlock(get_ccwdev_lock(pos->cdev)); - } - list_for_each_entry(pos, &lcu->inactive_devices, alias_list) { - if (pos != device) - spin_lock(get_ccwdev_lock(pos->cdev)); - pos->stopped |= DASD_STOPPED_SU; - if (pos != device) - spin_unlock(get_ccwdev_lock(pos->cdev)); - } + list_for_each_entry(pos, &lcu->active_devices, alias_list) + __stop_device_on_lcu(device, pos); + list_for_each_entry(pos, &lcu->inactive_devices, alias_list) + __stop_device_on_lcu(device, pos); list_for_each_entry(pavgroup, &lcu->grouplist, group) { - list_for_each_entry(pos, &pavgroup->baselist, alias_list) { - if (pos != device) - spin_lock(get_ccwdev_lock(pos->cdev)); - pos->stopped |= DASD_STOPPED_SU; - if (pos != device) - spin_unlock(get_ccwdev_lock(pos->cdev)); - } - list_for_each_entry(pos, &pavgroup->aliaslist, alias_list) { - if (pos != device) - spin_lock(get_ccwdev_lock(pos->cdev)); - pos->stopped |= DASD_STOPPED_SU; - if (pos != device) - spin_unlock(get_ccwdev_lock(pos->cdev)); - } + list_for_each_entry(pos, &pavgroup->baselist, alias_list) + __stop_device_on_lcu(device, pos); + list_for_each_entry(pos, &pavgroup->aliaslist, alias_list) + __stop_device_on_lcu(device, pos); } } diff --git a/drivers/s390/cio/cio.c b/drivers/s390/cio/cio.c index 41db3cc653f5..23ffcc4768a7 100644 --- a/drivers/s390/cio/cio.c +++ b/drivers/s390/cio/cio.c @@ -670,10 +670,14 @@ do_IRQ (struct pt_regs *regs) continue; } sch = (struct subchannel *)(unsigned long)tpi_info->intparm; - if (sch) - 
spin_lock(sch->lock); + if (!sch) { + /* Clear pending interrupt condition. */ + tsch(tpi_info->schid, irb); + continue; + } + spin_lock(sch->lock); /* Store interrupt response block to lowcore. */ - if (tsch (tpi_info->schid, irb) == 0 && sch) { + if (tsch(tpi_info->schid, irb) == 0) { /* Keep subchannel information word up to date. */ memcpy (&sch->schib.scsw, &irb->scsw, sizeof (irb->scsw)); @@ -681,8 +685,7 @@ do_IRQ (struct pt_regs *regs) if (sch->driver && sch->driver->irq) sch->driver->irq(sch); } - if (sch) - spin_unlock(sch->lock); + spin_unlock(sch->lock); /* * Are more interrupts pending? * If so, the tpi instruction will update the lowcore @@ -708,8 +711,9 @@ void *cio_get_console_priv(void) /* * busy wait for the next interrupt on the console */ -void -wait_cons_dev (void) +void wait_cons_dev(void) + __releases(console_subchannel.lock) + __acquires(console_subchannel.lock) { unsigned long cr6 __attribute__ ((aligned (8))); unsigned long save_cr6 __attribute__ ((aligned (8))); diff --git a/drivers/s390/cio/cio.h b/drivers/s390/cio/cio.h index 52afa4c784de..08f2235c5a6f 100644 --- a/drivers/s390/cio/cio.h +++ b/drivers/s390/cio/cio.h @@ -100,6 +100,7 @@ extern int cio_modify (struct subchannel *); int cio_create_sch_lock(struct subchannel *); void do_adapter_IO(void); +void do_IRQ(struct pt_regs *); /* Use with care. */ #ifdef CONFIG_CCW_CONSOLE diff --git a/drivers/s390/cio/device.c b/drivers/s390/cio/device.c index fec004f62bcf..e0c7adb8958e 100644 --- a/drivers/s390/cio/device.c +++ b/drivers/s390/cio/device.c @@ -577,7 +577,6 @@ static DEVICE_ATTR(devtype, 0444, devtype_show, NULL); static DEVICE_ATTR(cutype, 0444, cutype_show, NULL); static DEVICE_ATTR(modalias, 0444, modalias_show, NULL); static DEVICE_ATTR(online, 0644, online_show, online_store); -extern struct device_attribute dev_attr_cmb_enable; static DEVICE_ATTR(availability, 0444, available_show, NULL); static struct attribute * subch_attrs[] = { diff --git a/drivers/s390/cio/device.h b/drivers/s390/cio/device.h index d40a2ffaa000..cb08092be39f 100644 --- a/drivers/s390/cio/device.h +++ b/drivers/s390/cio/device.h @@ -127,4 +127,5 @@ extern struct bus_type ccw_bus_type; void retry_set_schib(struct ccw_device *cdev); void cmf_retry_copy_block(struct ccw_device *); int cmf_reenable(struct ccw_device *); +extern struct device_attribute dev_attr_cmb_enable; #endif diff --git a/drivers/s390/s390mach.h b/drivers/s390/s390mach.h index d3ca4281a494..ca681f9b67fc 100644 --- a/drivers/s390/s390mach.h +++ b/drivers/s390/s390mach.h @@ -105,4 +105,8 @@ static inline int stcrw(struct crw *pcrw ) #define ED_ETR_SYNC 12 /* External damage ETR sync check */ #define ED_ETR_SWITCH 13 /* External damage ETR switch to local */ +struct pt_regs; + +void s390_handle_mcck(void); +void s390_do_machine_check(struct pt_regs *regs); #endif /* __s390mach */ diff --git a/include/asm-s390/cio.h b/include/asm-s390/cio.h index 123b557c3ff4..0818ecd30ca6 100644 --- a/include/asm-s390/cio.h +++ b/include/asm-s390/cio.h @@ -397,6 +397,10 @@ struct cio_iplinfo { extern int cio_get_iplinfo(struct cio_iplinfo *iplinfo); +/* Function from drivers/s390/cio/chsc.c */ +int chsc_sstpc(void *page, unsigned int op, u16 ctrl); +int chsc_sstpi(void *page, void *result, size_t size); + #endif #endif diff --git a/include/asm-s390/timex.h b/include/asm-s390/timex.h index 6dd7eecbb8e4..d744c3d62de5 100644 --- a/include/asm-s390/timex.h +++ b/include/asm-s390/timex.h @@ -83,5 +83,6 @@ static inline cycles_t get_cycles(void) int get_sync_clock(unsigned long long 
*clock); void init_cpu_timer(void); +unsigned long long monotonic_clock(void); #endif diff --git a/include/asm-s390/tlbflush.h b/include/asm-s390/tlbflush.h index de723470c5d4..9e57a93d7de1 100644 --- a/include/asm-s390/tlbflush.h +++ b/include/asm-s390/tlbflush.h @@ -17,9 +17,10 @@ static inline void __tlb_flush_local(void) /* * Flush all tlb entries on all cpus. */ +void smp_ptlb_all(void); + static inline void __tlb_flush_global(void) { - extern void smp_ptlb_all(void); register unsigned long reg2 asm("2"); register unsigned long reg3 asm("3"); register unsigned long reg4 asm("4"); -- cgit v1.2.3-59-g8ed1b From 1749a81d629b1295b38071914728cc2e72066f4d Mon Sep 17 00:00:00 2001 From: Felix Beck Date: Thu, 17 Apr 2008 07:46:28 +0200 Subject: [S390] zcrypt: Comments and kernel-doc cleanup Comments, which suggested to be kernel-doc but were not in the right formatting, have been corrected. Additionally some minor cleanup in the comments has been done. Signed-off-by: Felix Beck Signed-off-by: Martin Schwidefsky Signed-off-by: Heiko Carstens --- drivers/s390/crypto/ap_bus.c | 189 ++++++++++++++++++++++------------- drivers/s390/crypto/ap_bus.h | 15 ++- drivers/s390/crypto/zcrypt_api.c | 63 ++++++++---- drivers/s390/crypto/zcrypt_cca_key.h | 4 +- drivers/s390/crypto/zcrypt_error.h | 2 +- drivers/s390/crypto/zcrypt_pcicc.c | 4 +- drivers/s390/crypto/zcrypt_pcixcc.c | 2 +- 7 files changed, 184 insertions(+), 95 deletions(-) (limited to 'drivers/s390') diff --git a/drivers/s390/crypto/ap_bus.c b/drivers/s390/crypto/ap_bus.c index 7b0b81901297..a1ab3e3efd11 100644 --- a/drivers/s390/crypto/ap_bus.c +++ b/drivers/s390/crypto/ap_bus.c @@ -45,7 +45,7 @@ static int ap_poll_thread_start(void); static void ap_poll_thread_stop(void); static void ap_request_timeout(unsigned long); -/** +/* * Module description. */ MODULE_AUTHOR("IBM Corporation"); @@ -53,7 +53,7 @@ MODULE_DESCRIPTION("Adjunct Processor Bus driver, " "Copyright 2006 IBM Corporation"); MODULE_LICENSE("GPL"); -/** +/* * Module parameter */ int ap_domain_index = -1; /* Adjunct Processor Domain Index */ @@ -69,7 +69,7 @@ static struct device *ap_root_device = NULL; static DEFINE_SPINLOCK(ap_device_lock); static LIST_HEAD(ap_device_list); -/** +/* * Workqueue & timer for bus rescan. */ static struct workqueue_struct *ap_work_queue; @@ -77,7 +77,7 @@ static struct timer_list ap_config_timer; static int ap_config_time = AP_CONFIG_TIME; static DECLARE_WORK(ap_config_work, ap_scan_bus); -/** +/* * Tasklet & timer for AP request polling. */ static struct timer_list ap_poll_timer = TIMER_INITIALIZER(ap_poll_timeout,0,0); @@ -88,9 +88,9 @@ static struct task_struct *ap_poll_kthread = NULL; static DEFINE_MUTEX(ap_poll_thread_mutex); /** - * Test if ap instructions are available. + * ap_intructions_available() - Test if AP instructions are available. * - * Returns 0 if the ap instructions are installed. + * Returns 0 if the AP instructions are installed. */ static inline int ap_instructions_available(void) { @@ -108,12 +108,12 @@ static inline int ap_instructions_available(void) } /** - * Test adjunct processor queue. - * @qid: the ap queue number - * @queue_depth: pointer to queue depth value - * @device_type: pointer to device type value + * ap_test_queue(): Test adjunct processor queue. + * @qid: The AP queue number + * @queue_depth: Pointer to queue depth value + * @device_type: Pointer to device type value * - * Returns ap queue status structure. + * Returns AP queue status structure. 
*/ static inline struct ap_queue_status ap_test_queue(ap_qid_t qid, int *queue_depth, int *device_type) @@ -130,10 +130,10 @@ ap_test_queue(ap_qid_t qid, int *queue_depth, int *device_type) } /** - * Reset adjunct processor queue. - * @qid: the ap queue number + * ap_reset_queue(): Reset adjunct processor queue. + * @qid: The AP queue number * - * Returns ap queue status structure. + * Returns AP queue status structure. */ static inline struct ap_queue_status ap_reset_queue(ap_qid_t qid) { @@ -148,16 +148,14 @@ static inline struct ap_queue_status ap_reset_queue(ap_qid_t qid) } /** - * Send message to adjunct processor queue. - * @qid: the ap queue number - * @psmid: the program supplied message identifier - * @msg: the message text - * @length: the message length - * - * Returns ap queue status structure. + * __ap_send(): Send message to adjunct processor queue. + * @qid: The AP queue number + * @psmid: The program supplied message identifier + * @msg: The message text + * @length: The message length * + * Returns AP queue status structure. * Condition code 1 on NQAP can't happen because the L bit is 1. - * * Condition code 2 on NQAP also means the send is incomplete, * because a segment boundary was reached. The NQAP is repeated. */ @@ -198,23 +196,20 @@ int ap_send(ap_qid_t qid, unsigned long long psmid, void *msg, size_t length) } EXPORT_SYMBOL(ap_send); -/* - * Receive message from adjunct processor queue. - * @qid: the ap queue number - * @psmid: pointer to program supplied message identifier - * @msg: the message text - * @length: the message length - * - * Returns ap queue status structure. +/** + * __ap_recv(): Receive message from adjunct processor queue. + * @qid: The AP queue number + * @psmid: Pointer to program supplied message identifier + * @msg: The message text + * @length: The message length * + * Returns AP queue status structure. * Condition code 1 on DQAP means the receive has taken place * but only partially. The response is incomplete, hence the * DQAP is repeated. - * * Condition code 2 on DQAP also means the receive is incomplete, * this time because a segment boundary was reached. Again, the * DQAP is repeated. - * * Note that gpr2 is used by the DQAP instruction to keep track of * any 'residual' length, in case the instruction gets interrupted. * Hence it gets zeroed before the instruction. @@ -263,11 +258,12 @@ int ap_recv(ap_qid_t qid, unsigned long long *psmid, void *msg, size_t length) EXPORT_SYMBOL(ap_recv); /** - * Check if an AP queue is available. The test is repeated for - * AP_MAX_RESET times. - * @qid: the ap queue number - * @queue_depth: pointer to queue depth value - * @device_type: pointer to device type value + * ap_query_queue(): Check if an AP queue is available. + * @qid: The AP queue number + * @queue_depth: Pointer to queue depth value + * @device_type: Pointer to device type value + * + * The test is repeated for AP_MAX_RESET times. */ static int ap_query_queue(ap_qid_t qid, int *queue_depth, int *device_type) { @@ -308,8 +304,10 @@ static int ap_query_queue(ap_qid_t qid, int *queue_depth, int *device_type) } /** + * ap_init_queue(): Reset an AP queue. + * @qid: The AP queue number + * * Reset an AP queue and wait for it to become available again. - * @qid: the ap queue number */ static int ap_init_queue(ap_qid_t qid) { @@ -346,7 +344,10 @@ static int ap_init_queue(ap_qid_t qid) } /** - * Arm request timeout if a AP device was idle and a new request is submitted. + * ap_increase_queue_count(): Arm request timeout. 
+ * @ap_dev: Pointer to an AP device. + * + * Arm request timeout if an AP device was idle and a new request is submitted. */ static void ap_increase_queue_count(struct ap_device *ap_dev) { @@ -360,7 +361,10 @@ static void ap_increase_queue_count(struct ap_device *ap_dev) } /** - * AP device is still alive, re-schedule request timeout if there are still + * ap_decrease_queue_count(): Decrease queue count. + * @ap_dev: Pointer to an AP device. + * + * If AP device is still alive, re-schedule request timeout if there are still * pending requests. */ static void ap_decrease_queue_count(struct ap_device *ap_dev) @@ -371,7 +375,7 @@ static void ap_decrease_queue_count(struct ap_device *ap_dev) if (ap_dev->queue_count > 0) mod_timer(&ap_dev->timeout, jiffies + timeout); else - /** + /* * The timeout timer should to be disabled now - since * del_timer_sync() is very expensive, we just tell via the * reset flag to ignore the pending timeout timer. @@ -379,7 +383,7 @@ static void ap_decrease_queue_count(struct ap_device *ap_dev) ap_dev->reset = AP_RESET_IGNORE; } -/** +/* * AP device related attributes. */ static ssize_t ap_hwtype_show(struct device *dev, @@ -433,6 +437,10 @@ static struct attribute_group ap_dev_attr_group = { }; /** + * ap_bus_match() + * @dev: Pointer to device + * @drv: Pointer to device_driver + * * AP bus driver registration/unregistration. */ static int ap_bus_match(struct device *dev, struct device_driver *drv) @@ -441,7 +449,7 @@ static int ap_bus_match(struct device *dev, struct device_driver *drv) struct ap_driver *ap_drv = to_ap_drv(drv); struct ap_device_id *id; - /** + /* * Compare device type of the device with the list of * supported types of the device_driver. */ @@ -455,8 +463,12 @@ static int ap_bus_match(struct device *dev, struct device_driver *drv) } /** - * uevent function for AP devices. It sets up a single environment - * variable DEV_TYPE which contains the hardware device type. + * ap_uevent(): Uevent function for AP devices. + * @dev: Pointer to device + * @env: Pointer to kobj_uevent_env + * + * It sets up a single environment variable DEV_TYPE which contains the + * hardware device type. */ static int ap_uevent (struct device *dev, struct kobj_uevent_env *env) { @@ -500,8 +512,10 @@ static int ap_device_probe(struct device *dev) } /** + * __ap_flush_queue(): Flush requests. + * @ap_dev: Pointer to the AP device + * * Flush all requests from the request/pending queue of an AP device. - * @ap_dev: pointer to the AP device. */ static void __ap_flush_queue(struct ap_device *ap_dev) { @@ -565,7 +579,7 @@ void ap_driver_unregister(struct ap_driver *ap_drv) } EXPORT_SYMBOL(ap_driver_unregister); -/** +/* * AP bus attributes. */ static ssize_t ap_domain_show(struct bus_type *bus, char *buf) @@ -630,14 +644,16 @@ static struct bus_attribute *const ap_bus_attrs[] = { }; /** - * Pick one of the 16 ap domains. + * ap_select_domain(): Select an AP domain. + * + * Pick one of the 16 AP domains. */ static int ap_select_domain(void) { int queue_depth, device_type, count, max_count, best_domain; int rc, i, j; - /** + /* * We want to use a single domain. Either the one specified with * the "domain=" parameter or the domain with the maximum number * of devices. @@ -669,8 +685,10 @@ static int ap_select_domain(void) } /** - * Find the device type if query queue returned a device type of 0. + * ap_probe_device_type(): Find the device type of an AP. * @ap_dev: pointer to the AP device. + * + * Find the device type if query queue returned a device type of 0. 
*/ static int ap_probe_device_type(struct ap_device *ap_dev) { @@ -764,7 +782,11 @@ out: } /** - * Scan the ap bus for new devices. + * __ap_scan_bus(): Scan the AP bus. + * @dev: Pointer to device + * @data: Pointer to data + * + * Scan the AP bus for new devices. */ static int __ap_scan_bus(struct device *dev, void *data) { @@ -867,6 +889,8 @@ ap_config_timeout(unsigned long ptr) } /** + * ap_schedule_poll_timer(): Schedule poll timer. + * * Set up the timer to run the poll tasklet */ static inline void ap_schedule_poll_timer(void) @@ -877,10 +901,11 @@ static inline void ap_schedule_poll_timer(void) } /** - * Receive pending reply messages from an AP device. + * ap_poll_read(): Receive pending reply messages from an AP device. * @ap_dev: pointer to the AP device * @flags: pointer to control flags, bit 2^0 is set if another poll is * required, bit 2^1 is set if the poll timer needs to get armed + * * Returns 0 if the device is still present, -ENODEV if not. */ static int ap_poll_read(struct ap_device *ap_dev, unsigned long *flags) @@ -925,10 +950,11 @@ static int ap_poll_read(struct ap_device *ap_dev, unsigned long *flags) } /** - * Send messages from the request queue to an AP device. + * ap_poll_write(): Send messages from the request queue to an AP device. * @ap_dev: pointer to the AP device * @flags: pointer to control flags, bit 2^0 is set if another poll is * required, bit 2^1 is set if the poll timer needs to get armed + * * Returns 0 if the device is still present, -ENODEV if not. */ static int ap_poll_write(struct ap_device *ap_dev, unsigned long *flags) @@ -968,11 +994,13 @@ static int ap_poll_write(struct ap_device *ap_dev, unsigned long *flags) } /** - * Poll AP device for pending replies and send new messages. If either - * ap_poll_read or ap_poll_write returns -ENODEV unregister the device. + * ap_poll_queue(): Poll AP device for pending replies and send new messages. * @ap_dev: pointer to the bus device * @flags: pointer to control flags, bit 2^0 is set if another poll is * required, bit 2^1 is set if the poll timer needs to get armed + * + * Poll AP device for pending replies and send new messages. If either + * ap_poll_read or ap_poll_write returns -ENODEV unregister the device. * Returns 0. */ static inline int ap_poll_queue(struct ap_device *ap_dev, unsigned long *flags) @@ -986,9 +1014,11 @@ static inline int ap_poll_queue(struct ap_device *ap_dev, unsigned long *flags) } /** - * Queue a message to a device. + * __ap_queue_message(): Queue a message to a device. * @ap_dev: pointer to the AP device * @ap_msg: the message to be queued + * + * Queue a message to a device. Returns 0 if successful. */ static int __ap_queue_message(struct ap_device *ap_dev, struct ap_message *ap_msg) { @@ -1055,12 +1085,14 @@ void ap_queue_message(struct ap_device *ap_dev, struct ap_message *ap_msg) EXPORT_SYMBOL(ap_queue_message); /** + * ap_cancel_message(): Cancel a crypto request. + * @ap_dev: The AP device that has the message queued + * @ap_msg: The message that is to be removed + * * Cancel a crypto request. This is done by removing the request - * from the devive pendingq or requestq queue. Note that the + * from the device pending or request queue. Note that the * request stays on the AP queue. When it finishes the message * reply will be discarded because the psmid can't be found. 
- * @ap_dev: AP device that has the message queued - * @ap_msg: the message that is to be removed */ void ap_cancel_message(struct ap_device *ap_dev, struct ap_message *ap_msg) { @@ -1082,7 +1114,10 @@ void ap_cancel_message(struct ap_device *ap_dev, struct ap_message *ap_msg) EXPORT_SYMBOL(ap_cancel_message); /** - * AP receive polling for finished AP requests + * ap_poll_timeout(): AP receive polling for finished AP requests. + * @unused: Unused variable. + * + * Schedules the AP tasklet. */ static void ap_poll_timeout(unsigned long unused) { @@ -1090,6 +1125,9 @@ static void ap_poll_timeout(unsigned long unused) } /** + * ap_reset(): Reset a not responding AP device. + * @ap_dev: Pointer to the AP device + * * Reset a not responding AP device and move all requests from the * pending queue to the request queue. */ @@ -1108,11 +1146,6 @@ static void ap_reset(struct ap_device *ap_dev) ap_dev->unregistered = 1; } -/** - * Poll all AP devices on the bus in a round robin fashion. Continue - * polling until bit 2^0 of the control flags is not set. If bit 2^1 - * of the control flags has been set arm the poll timer. - */ static int __ap_poll_all(struct ap_device *ap_dev, unsigned long *flags) { spin_lock(&ap_dev->lock); @@ -1126,6 +1159,14 @@ static int __ap_poll_all(struct ap_device *ap_dev, unsigned long *flags) return 0; } +/** + * ap_poll_all(): Poll all AP devices. + * @dummy: Unused variable + * + * Poll all AP devices on the bus in a round robin fashion. Continue + * polling until bit 2^0 of the control flags is not set. If bit 2^1 + * of the control flags has been set arm the poll timer. + */ static void ap_poll_all(unsigned long dummy) { unsigned long flags; @@ -1144,6 +1185,9 @@ static void ap_poll_all(unsigned long dummy) } /** + * ap_poll_thread(): Thread that polls for finished requests. + * @data: Unused pointer + * * AP bus poll thread. The purpose of this thread is to poll for * finished requests in a loop if there is a "free" cpu - that is * a cpu that doesn't have anything better to do. The polling stops @@ -1213,7 +1257,10 @@ static void ap_poll_thread_stop(void) } /** - * Handling of request timeouts + * ap_request_timeout(): Handling of request timeouts + * @data: Holds the AP device. + * + * Handles request timeouts. */ static void ap_request_timeout(unsigned long data) { @@ -1246,7 +1293,9 @@ static struct reset_call ap_reset_call = { }; /** - * The module initialization code. + * ap_module_init(): The module initialization code. + * + * Initializes the module. */ int __init ap_module_init(void) { @@ -1288,7 +1337,7 @@ int __init ap_module_init(void) if (ap_select_domain() == 0) ap_scan_bus(NULL); - /* Setup the ap bus rescan timer. */ + /* Setup the AP bus rescan timer. */ init_timer(&ap_config_timer); ap_config_timer.function = ap_config_timeout; ap_config_timer.data = 0; @@ -1325,7 +1374,9 @@ static int __ap_match_all(struct device *dev, void *data) } /** - * The module termination code + * ap_modules_exit(): The module termination code + * + * Terminates the module. */ void ap_module_exit(void) { diff --git a/drivers/s390/crypto/ap_bus.h b/drivers/s390/crypto/ap_bus.h index 87c2d6442875..c1e1200c43fc 100644 --- a/drivers/s390/crypto/ap_bus.h +++ b/drivers/s390/crypto/ap_bus.h @@ -50,6 +50,15 @@ typedef unsigned int ap_qid_t; #define AP_QID_QUEUE(_qid) ((_qid) & 15) /** + * structy ap_queue_status - Holds the AP queue status. 
+ * @queue_empty: Shows if queue is empty + * @replies_waiting: Waiting replies + * @queue_full: Is 1 if the queue is full + * @pad: A 4 bit pad + * @int_enabled: Shows if interrupts are enabled for the AP + * @response_conde: Holds the 8 bit response code + * @pad2: A 16 bit pad + * * The ap queue status word is returned by all three AP functions * (PQAP, NQAP and DQAP). There's a set of flags in the first * byte, followed by a 1 byte response code. @@ -75,7 +84,7 @@ struct ap_queue_status { #define AP_RESPONSE_NO_FIRST_PART 0x13 #define AP_RESPONSE_MESSAGE_TOO_BIG 0x15 -/** +/* * Known device types */ #define AP_DEVICE_TYPE_PCICC 3 @@ -84,7 +93,7 @@ struct ap_queue_status { #define AP_DEVICE_TYPE_CEX2A 6 #define AP_DEVICE_TYPE_CEX2C 7 -/** +/* * AP reset flag states */ #define AP_RESET_IGNORE 0 /* request timeout will be ignored */ @@ -152,7 +161,7 @@ struct ap_message { .dev_type=(dt), \ .match_flags=AP_DEVICE_ID_MATCH_DEVICE_TYPE, -/** +/* * Note: don't use ap_send/ap_recv after using ap_queue_message * for the first time. Otherwise the ap message queue will get * confused. diff --git a/drivers/s390/crypto/zcrypt_api.c b/drivers/s390/crypto/zcrypt_api.c index b2740ff2e615..4d36e805a234 100644 --- a/drivers/s390/crypto/zcrypt_api.c +++ b/drivers/s390/crypto/zcrypt_api.c @@ -40,7 +40,7 @@ #include "zcrypt_api.h" -/** +/* * Module description. */ MODULE_AUTHOR("IBM Corporation"); @@ -56,7 +56,7 @@ static atomic_t zcrypt_open_count = ATOMIC_INIT(0); static int zcrypt_rng_device_add(void); static void zcrypt_rng_device_remove(void); -/** +/* * Device attributes common for all crypto devices. */ static ssize_t zcrypt_type_show(struct device *dev, @@ -103,6 +103,9 @@ static struct attribute_group zcrypt_device_attr_group = { }; /** + * __zcrypt_increase_preference(): Increase preference of a crypto device. + * @zdev: Pointer the crypto device + * * Move the device towards the head of the device list. * Need to be called while holding the zcrypt device list lock. * Note: cards with speed_rating of 0 are kept at the end of the list. @@ -129,6 +132,9 @@ static void __zcrypt_increase_preference(struct zcrypt_device *zdev) } /** + * __zcrypt_decrease_preference(): Decrease preference of a crypto device. + * @zdev: Pointer to a crypto device. + * * Move the device towards the tail of the device list. * Need to be called while holding the zcrypt device list lock. * Note: cards with speed_rating of 0 are kept at the end of the list. @@ -202,7 +208,10 @@ void zcrypt_device_free(struct zcrypt_device *zdev) EXPORT_SYMBOL(zcrypt_device_free); /** - * Register a crypto device. + * zcrypt_device_register() - Register a crypto device. + * @zdev: Pointer to a crypto device + * + * Register a crypto device. Returns 0 if successful. */ int zcrypt_device_register(struct zcrypt_device *zdev) { @@ -242,6 +251,9 @@ out: EXPORT_SYMBOL(zcrypt_device_register); /** + * zcrypt_device_unregister(): Unregister a crypto device. + * @zdev: Pointer to crypto device + * * Unregister a crypto device. */ void zcrypt_device_unregister(struct zcrypt_device *zdev) @@ -260,7 +272,9 @@ void zcrypt_device_unregister(struct zcrypt_device *zdev) EXPORT_SYMBOL(zcrypt_device_unregister); /** - * zcrypt_read is not be supported beyond zcrypt 1.3.1 + * zcrypt_read (): Not supported beyond zcrypt 1.3.1. + * + * This function is not supported beyond zcrypt 1.3.1. 
*/ static ssize_t zcrypt_read(struct file *filp, char __user *buf, size_t count, loff_t *f_pos) @@ -269,6 +283,8 @@ static ssize_t zcrypt_read(struct file *filp, char __user *buf, } /** + * zcrypt_write(): Not allowed. + * * Write is is not allowed */ static ssize_t zcrypt_write(struct file *filp, const char __user *buf, @@ -278,7 +294,9 @@ static ssize_t zcrypt_write(struct file *filp, const char __user *buf, } /** - * Device open/close functions to count number of users. + * zcrypt_open(): Count number of users. + * + * Device open function to count number of users. */ static int zcrypt_open(struct inode *inode, struct file *filp) { @@ -286,13 +304,18 @@ static int zcrypt_open(struct inode *inode, struct file *filp) return 0; } +/** + * zcrypt_release(): Count number of users. + * + * Device close function to count number of users. + */ static int zcrypt_release(struct inode *inode, struct file *filp) { atomic_dec(&zcrypt_open_count); return 0; } -/** +/* * zcrypt ioctls. */ static long zcrypt_rsa_modexpo(struct ica_rsa_modexpo *mex) @@ -302,7 +325,7 @@ static long zcrypt_rsa_modexpo(struct ica_rsa_modexpo *mex) if (mex->outputdatalength < mex->inputdatalength) return -EINVAL; - /** + /* * As long as outputdatalength is big enough, we can set the * outputdatalength equal to the inputdatalength, since that is the * number of bytes we will copy in any case @@ -348,7 +371,7 @@ static long zcrypt_rsa_crt(struct ica_rsa_modexpo_crt *crt) if (crt->outputdatalength < crt->inputdatalength || (crt->inputdatalength & 1)) return -EINVAL; - /** + /* * As long as outputdatalength is big enough, we can set the * outputdatalength equal to the inputdatalength, since that is the * number of bytes we will copy in any case @@ -365,7 +388,7 @@ static long zcrypt_rsa_crt(struct ica_rsa_modexpo_crt *crt) zdev->max_mod_size < crt->inputdatalength) continue; if (zdev->short_crt && crt->inputdatalength > 240) { - /** + /* * Check inputdata for leading zeros for cards * that can't handle np_prime, bp_key, or * u_mult_inv > 128 bytes. @@ -381,7 +404,7 @@ static long zcrypt_rsa_crt(struct ica_rsa_modexpo_crt *crt) copy_from_user(&z3, crt->u_mult_inv, len)) return -EFAULT; copied = 1; - /** + /* * We have to restart device lookup - * the device list may have changed by now. */ @@ -567,6 +590,8 @@ static int zcrypt_count_type(int type) } /** + * zcrypt_ica_status(): Old, depracted combi status call. + * * Old, deprecated combi status call. */ static long zcrypt_ica_status(struct file *filp, unsigned long arg) @@ -668,7 +693,7 @@ static long zcrypt_unlocked_ioctl(struct file *filp, unsigned int cmd, (int __user *) arg); case Z90STAT_DOMAIN_INDEX: return put_user(ap_domain_index, (int __user *) arg); - /** + /* * Deprecated ioctls. Don't add another device count ioctl, * you can count them yourself in the user space with the * output of the Z90STAT_STATUS_MASK ioctl. @@ -706,7 +731,7 @@ static long zcrypt_unlocked_ioctl(struct file *filp, unsigned int cmd, } #ifdef CONFIG_COMPAT -/** +/* * ioctl32 conversion routines */ struct compat_ica_rsa_modexpo { @@ -857,7 +882,7 @@ static long zcrypt_compat_ioctl(struct file *filp, unsigned int cmd, } #endif -/** +/* * Misc device file operations. */ static const struct file_operations zcrypt_fops = { @@ -872,7 +897,7 @@ static const struct file_operations zcrypt_fops = { .release = zcrypt_release }; -/** +/* * Misc device. 
*/ static struct miscdevice zcrypt_misc_device = { @@ -881,7 +906,7 @@ static struct miscdevice zcrypt_misc_device = { .fops = &zcrypt_fops, }; -/** +/* * Deprecated /proc entry support. */ static struct proc_dir_entry *zcrypt_entry; @@ -1075,7 +1100,7 @@ static int zcrypt_status_write(struct file *file, const char __user *buffer, } for (j = 0; j < 64 && *ptr; ptr++) { - /** + /* * '0' for no device, '1' for PCICA, '2' for PCICC, * '3' for PCIXCC_MCL2, '4' for PCIXCC_MCL3, * '5' for CEX2C and '6' for CEX2A' @@ -1103,7 +1128,7 @@ static int zcrypt_rng_data_read(struct hwrng *rng, u32 *data) { int rc; - /** + /* * We don't need locking here because the RNG API guarantees serialized * read method calls. */ @@ -1162,6 +1187,8 @@ static void zcrypt_rng_device_remove(void) } /** + * zcrypt_api_init(): Module initialization. + * * The module initialization code. */ int __init zcrypt_api_init(void) @@ -1196,6 +1223,8 @@ out: } /** + * zcrypt_api_exit(): Module termination. + * * The module termination code. */ void zcrypt_api_exit(void) diff --git a/drivers/s390/crypto/zcrypt_cca_key.h b/drivers/s390/crypto/zcrypt_cca_key.h index 8dbcf0eef3e5..ed82f2f59b17 100644 --- a/drivers/s390/crypto/zcrypt_cca_key.h +++ b/drivers/s390/crypto/zcrypt_cca_key.h @@ -174,7 +174,7 @@ static inline int zcrypt_type6_mex_key_de(struct ica_rsa_modexpo *mex, key->pvtMeHdr = static_pvt_me_hdr; key->pvtMeSec = static_pvt_me_sec; key->pubMeSec = static_pub_me_sec; - /** + /* * In a private key, the modulus doesn't appear in the public * section. So, an arbitrary public exponent of 0x010001 will be * used. @@ -338,7 +338,7 @@ static inline int zcrypt_type6_crt_key(struct ica_rsa_modexpo_crt *crt, pub = (struct cca_public_sec *)(key->key_parts + key_len); *pub = static_cca_pub_sec; pub->modulus_bit_len = 8 * crt->inputdatalength; - /** + /* * In a private key, the modulus doesn't appear in the public * section. So, an arbitrary public exponent of 0x010001 will be * used. diff --git a/drivers/s390/crypto/zcrypt_error.h b/drivers/s390/crypto/zcrypt_error.h index 2cb616ba8bec..3e27fe77d207 100644 --- a/drivers/s390/crypto/zcrypt_error.h +++ b/drivers/s390/crypto/zcrypt_error.h @@ -108,7 +108,7 @@ static inline int convert_error(struct zcrypt_device *zdev, return -EINVAL; case REP82_ERROR_MESSAGE_TYPE: // REP88_ERROR_MESSAGE_TYPE // '20' CEX2A - /** + /* * To sent a message of the wrong type is a bug in the * device driver. Warn about it, disable the device * and then repeat the request. diff --git a/drivers/s390/crypto/zcrypt_pcicc.c b/drivers/s390/crypto/zcrypt_pcicc.c index d6d59bf9ac38..17ea56ce1c11 100644 --- a/drivers/s390/crypto/zcrypt_pcicc.c +++ b/drivers/s390/crypto/zcrypt_pcicc.c @@ -42,7 +42,7 @@ #define PCICC_MAX_MOD_SIZE_OLD 128 /* 1024 bits */ #define PCICC_MAX_MOD_SIZE 256 /* 2048 bits */ -/** +/* * PCICC cards need a speed rating of 0. This keeps them at the end of * the zcrypt device list (see zcrypt_api.c). PCICC cards are only * used if no other cards are present because they are slow and can only @@ -388,7 +388,7 @@ static int convert_type86(struct zcrypt_device *zdev, reply_len = le16_to_cpu(msg->length) - 2; if (reply_len > outputdatalength) return -EINVAL; - /** + /* * For all encipher requests, the length of the ciphertext (reply_len) * will always equal the modulus length. For MEX decipher requests * the output needs to get padded. Minimum pad size is 10. 
diff --git a/drivers/s390/crypto/zcrypt_pcixcc.c b/drivers/s390/crypto/zcrypt_pcixcc.c index 3674bfa82b65..0bc9b3188e64 100644 --- a/drivers/s390/crypto/zcrypt_pcixcc.c +++ b/drivers/s390/crypto/zcrypt_pcixcc.c @@ -501,7 +501,7 @@ static int convert_type86_ica(struct zcrypt_device *zdev, reply_len = msg->length - 2; if (reply_len > outputdatalength) return -EINVAL; - /** + /* * For all encipher requests, the length of the ciphertext (reply_len) * will always equal the modulus length. For MEX decipher requests * the output needs to get padded. Minimum pad size is 10. -- cgit v1.2.3-59-g8ed1b From ca68305bf3c76c4a7cd1c77d5423219f39164df8 Mon Sep 17 00:00:00 2001 From: Martin Schwidefsky Date: Thu, 17 Apr 2008 07:46:31 +0200 Subject: [S390] Remove code duplication from monreader / dcssblk. Move the function that prints the segment warning messages found in the monreader driver and the dcssblk driver to the extmem base code. Signed-off-by: Martin Schwidefsky Signed-off-by: Heiko Carstens --- arch/s390/mm/extmem.c | 67 +++++++++++++++++++++++++++++++++---------- drivers/s390/block/dcssblk.c | 53 +--------------------------------- drivers/s390/char/monreader.c | 54 ++-------------------------------- include/asm-s390/extmem.h | 11 +++---- 4 files changed, 61 insertions(+), 124 deletions(-) (limited to 'drivers/s390') diff --git a/arch/s390/mm/extmem.c b/arch/s390/mm/extmem.c index 880b0ebf894b..ed2af0a3303b 100644 --- a/arch/s390/mm/extmem.c +++ b/arch/s390/mm/extmem.c @@ -289,22 +289,8 @@ __segment_load (char *name, int do_nonshared, unsigned long *addr, unsigned long rc = add_shared_memory(seg->start_addr, seg->end - seg->start_addr + 1); - switch (rc) { - case 0: - break; - case -ENOSPC: - PRINT_WARN("segment_load: not loading segment %s - overlaps " - "storage/segment\n", name); - goto out_free; - case -ERANGE: - PRINT_WARN("segment_load: not loading segment %s - exceeds " - "kernel mapping range\n", name); - goto out_free; - default: - PRINT_WARN("segment_load: not loading segment %s (rc: %d)\n", - name, rc); + if (rc) goto out_free; - } seg->res = kzalloc(sizeof(struct resource), GFP_KERNEL); if (seg->res == NULL) { @@ -582,8 +568,59 @@ out: mutex_unlock(&dcss_lock); } +/* + * print appropriate error message for segment_load()/segment_type() + * return code + */ +void segment_warning(int rc, char *seg_name) +{ + switch (rc) { + case -ENOENT: + PRINT_WARN("cannot load/query segment %s, " + "does not exist\n", seg_name); + break; + case -ENOSYS: + PRINT_WARN("cannot load/query segment %s, " + "not running on VM\n", seg_name); + break; + case -EIO: + PRINT_WARN("cannot load/query segment %s, " + "hardware error\n", seg_name); + break; + case -ENOTSUPP: + PRINT_WARN("cannot load/query segment %s, " + "is a multi-part segment\n", seg_name); + break; + case -ENOSPC: + PRINT_WARN("cannot load/query segment %s, " + "overlaps with storage\n", seg_name); + break; + case -EBUSY: + PRINT_WARN("cannot load/query segment %s, " + "overlaps with already loaded dcss\n", seg_name); + break; + case -EPERM: + PRINT_WARN("cannot load/query segment %s, " + "already loaded in incompatible mode\n", seg_name); + break; + case -ENOMEM: + PRINT_WARN("cannot load/query segment %s, " + "out of memory\n", seg_name); + break; + case -ERANGE: + PRINT_WARN("cannot load/query segment %s, " + "exceeds kernel mapping range\n", seg_name); + break; + default: + PRINT_WARN("cannot load/query segment %s, " + "return value %i\n", seg_name, rc); + break; + } +} + EXPORT_SYMBOL(segment_load); EXPORT_SYMBOL(segment_unload); 
EXPORT_SYMBOL(segment_save); EXPORT_SYMBOL(segment_type); EXPORT_SYMBOL(segment_modify_shared); +EXPORT_SYMBOL(segment_warning); diff --git a/drivers/s390/block/dcssblk.c b/drivers/s390/block/dcssblk.c index e6c94dbfdeaa..04787eab1016 100644 --- a/drivers/s390/block/dcssblk.c +++ b/drivers/s390/block/dcssblk.c @@ -142,57 +142,6 @@ dcssblk_get_device_by_name(char *name) return NULL; } -/* - * print appropriate error message for segment_load()/segment_type() - * return code - */ -static void -dcssblk_segment_warn(int rc, char* seg_name) -{ - switch (rc) { - case -ENOENT: - PRINT_WARN("cannot load/query segment %s, does not exist\n", - seg_name); - break; - case -ENOSYS: - PRINT_WARN("cannot load/query segment %s, not running on VM\n", - seg_name); - break; - case -EIO: - PRINT_WARN("cannot load/query segment %s, hardware error\n", - seg_name); - break; - case -ENOTSUPP: - PRINT_WARN("cannot load/query segment %s, is a multi-part " - "segment\n", seg_name); - break; - case -ENOSPC: - PRINT_WARN("cannot load/query segment %s, overlaps with " - "storage\n", seg_name); - break; - case -EBUSY: - PRINT_WARN("cannot load/query segment %s, overlaps with " - "already loaded dcss\n", seg_name); - break; - case -EPERM: - PRINT_WARN("cannot load/query segment %s, already loaded in " - "incompatible mode\n", seg_name); - break; - case -ENOMEM: - PRINT_WARN("cannot load/query segment %s, out of memory\n", - seg_name); - break; - case -ERANGE: - PRINT_WARN("cannot load/query segment %s, exceeds kernel " - "mapping range\n", seg_name); - break; - default: - PRINT_WARN("cannot load/query segment %s, return value %i\n", - seg_name, rc); - break; - } -} - static void dcssblk_unregister_callback(struct device *dev) { device_unregister(dev); @@ -423,7 +372,7 @@ dcssblk_add_store(struct device *dev, struct device_attribute *attr, const char rc = segment_load(local_buf, SEGMENT_SHARED, &dev_info->start, &dev_info->end); if (rc < 0) { - dcssblk_segment_warn(rc, dev_info->segment_name); + segment_warning(rc, dev_info->segment_name); goto dealloc_gendisk; } seg_byte_size = (dev_info->end - dev_info->start + 1); diff --git a/drivers/s390/char/monreader.c b/drivers/s390/char/monreader.c index 67009bfa093e..1e1f50655bbf 100644 --- a/drivers/s390/char/monreader.c +++ b/drivers/s390/char/monreader.c @@ -111,56 +111,6 @@ static void dcss_mkname(char *ascii_name, char *ebcdic_name) ASCEBC(ebcdic_name, 8); } -/* - * print appropriate error message for segment_load()/segment_type() - * return code - */ -static void mon_segment_warn(int rc, char* seg_name) -{ - switch (rc) { - case -ENOENT: - P_WARNING("cannot load/query segment %s, does not exist\n", - seg_name); - break; - case -ENOSYS: - P_WARNING("cannot load/query segment %s, not running on VM\n", - seg_name); - break; - case -EIO: - P_WARNING("cannot load/query segment %s, hardware error\n", - seg_name); - break; - case -ENOTSUPP: - P_WARNING("cannot load/query segment %s, is a multi-part " - "segment\n", seg_name); - break; - case -ENOSPC: - P_WARNING("cannot load/query segment %s, overlaps with " - "storage\n", seg_name); - break; - case -EBUSY: - P_WARNING("cannot load/query segment %s, overlaps with " - "already loaded dcss\n", seg_name); - break; - case -EPERM: - P_WARNING("cannot load/query segment %s, already loaded in " - "incompatible mode\n", seg_name); - break; - case -ENOMEM: - P_WARNING("cannot load/query segment %s, out of memory\n", - seg_name); - break; - case -ERANGE: - P_WARNING("cannot load/query segment %s, exceeds kernel " - "mapping range\n", 
seg_name); - break; - default: - P_WARNING("cannot load/query segment %s, return value %i\n", - seg_name, rc); - break; - } -} - static inline unsigned long mon_mca_start(struct mon_msg *monmsg) { return *(u32 *) &monmsg->msg.rmmsg; @@ -585,7 +535,7 @@ static int __init mon_init(void) rc = segment_type(mon_dcss_name); if (rc < 0) { - mon_segment_warn(rc, mon_dcss_name); + segment_warning(rc, mon_dcss_name); goto out_iucv; } if (rc != SEG_TYPE_SC) { @@ -598,7 +548,7 @@ static int __init mon_init(void) rc = segment_load(mon_dcss_name, SEGMENT_SHARED, &mon_dcss_start, &mon_dcss_end); if (rc < 0) { - mon_segment_warn(rc, mon_dcss_name); + segment_warning(rc, mon_dcss_name); rc = -EINVAL; goto out_iucv; } diff --git a/include/asm-s390/extmem.h b/include/asm-s390/extmem.h index c8802c934b74..33837d756184 100644 --- a/include/asm-s390/extmem.h +++ b/include/asm-s390/extmem.h @@ -22,11 +22,12 @@ #define SEGMENT_SHARED 0 #define SEGMENT_EXCLUSIVE 1 -extern int segment_load (char *name,int segtype,unsigned long *addr,unsigned long *length); -extern void segment_unload(char *name); -extern void segment_save(char *name); -extern int segment_type (char* name); -extern int segment_modify_shared (char *name, int do_nonshared); +int segment_load (char *name, int segtype, unsigned long *addr, unsigned long *length); +void segment_unload(char *name); +void segment_save(char *name); +int segment_type (char* name); +int segment_modify_shared (char *name, int do_nonshared); +void segment_warning(int rc, char *seg_name); #endif #endif -- cgit v1.2.3-59-g8ed1b From ee95a16d3950367d32beb6ffed287666631dbda9 Mon Sep 17 00:00:00 2001 From: Martin Peschke Date: Thu, 17 Apr 2008 00:08:03 +0200 Subject: [SCSI] zfcp: fix compiler warning caused by poking inside new semaphore (linux-next) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit as seen in linux-next tree: drivers/s390/scsi/zfcp_dbf.c: In function ‘zfcp_rec_dbf_event_thread’: drivers/s390/scsi/zfcp_dbf.c:697: warning: passing argument 1 of ‘atomic_read’ from incompatible pointer type Caused by recent git commit: commit 348447e85749120ad600a5c8e23b6bb7058b931d Author: Martin Peschke Date: Thu Mar 27 14:22:01 2008 +0100 [SCSI] zfcp: Add trace records for recovery thread and its queues We are not supposed to poke inside semaphore. 
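A minimal user-space sketch of the warning this commit removes (stand-in types and names, not the kernel's headers or the zfcp driver itself): the generic semaphore introduced around this time keeps its count as a plain unsigned int guarded by a spinlock, so handing &sem->count to atomic_read(), which expects an atomic_t *, draws exactly the "incompatible pointer type" warning quoted above. The patch simply stops recording the field rather than chasing semaphore internals.

        /*
         * Illustrative stand-in only: atomic_t, atomic_read() and the
         * semaphore layout below mimic the kernel's shapes in user space.
         */
        #include <stdio.h>

        typedef struct { int counter; } atomic_t;

        static int atomic_read(const atomic_t *v)
        {
                return v->counter;
        }

        struct semaphore {              /* count is not an atomic_t anymore */
                unsigned int count;
        };

        int main(void)
        {
                struct semaphore erp_ready_sem = { .count = 0 };
                atomic_t ok = { .counter = 3 };

                printf("atomic ok: %d\n", atomic_read(&ok));
                /*
                 * The removed trace line did the equivalent of
                 *      atomic_read(&erp_ready_sem.count)
                 * which now warns, since &erp_ready_sem.count is an
                 * unsigned int *, not an atomic_t *.
                 */
                printf("sem count: %u\n", erp_ready_sem.count);
                return 0;
        }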
Signed-off-by: Martin Peschke Acked-by: Christof Schmitt Signed-off-by: James Bottomley --- drivers/s390/scsi/zfcp_dbf.c | 2 -- drivers/s390/scsi/zfcp_dbf.h | 1 - 2 files changed, 3 deletions(-) (limited to 'drivers/s390') diff --git a/drivers/s390/scsi/zfcp_dbf.c b/drivers/s390/scsi/zfcp_dbf.c index 85ba4cc4190e..c34a874482a5 100644 --- a/drivers/s390/scsi/zfcp_dbf.c +++ b/drivers/s390/scsi/zfcp_dbf.c @@ -623,7 +623,6 @@ static int zfcp_rec_dbf_view_format(debug_info_t *id, struct debug_view *view, zfcp_dbf_out(&p, "id", "%d", r->id2); switch (r->id) { case ZFCP_REC_DBF_ID_THREAD: - zfcp_dbf_out(&p, "sema", "%d", r->u.thread.sema); zfcp_dbf_out(&p, "total", "%d", r->u.thread.total); zfcp_dbf_out(&p, "ready", "%d", r->u.thread.ready); zfcp_dbf_out(&p, "running", "%d", r->u.thread.running); @@ -694,7 +693,6 @@ void zfcp_rec_dbf_event_thread(u8 id2, struct zfcp_adapter *adapter, int lock) memset(r, 0, sizeof(*r)); r->id = ZFCP_REC_DBF_ID_THREAD; r->id2 = id2; - r->u.thread.sema = atomic_read(&adapter->erp_ready_sem.count); r->u.thread.total = total; r->u.thread.ready = ready; r->u.thread.running = running; diff --git a/drivers/s390/scsi/zfcp_dbf.h b/drivers/s390/scsi/zfcp_dbf.h index 732a5ba1bea9..54c34e483457 100644 --- a/drivers/s390/scsi/zfcp_dbf.h +++ b/drivers/s390/scsi/zfcp_dbf.h @@ -35,7 +35,6 @@ struct zfcp_dbf_dump { } __attribute__ ((packed)); struct zfcp_rec_dbf_record_thread { - u32 sema; u32 total; u32 ready; u32 running; -- cgit v1.2.3-59-g8ed1b From 1f6f7129ebac007629b28764bfa5147817682692 Mon Sep 17 00:00:00 2001 From: Martin Peschke Date: Fri, 18 Apr 2008 12:51:55 +0200 Subject: [SCSI] zfcp: fix 31 bit compile warnings MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit drivers/s390/scsi/zfcp_aux.c: In function ‘zfcp_fsf_incoming_els_rscn’: drivers/s390/scsi/zfcp_aux.c:1379: warning: cast from pointer to integer of different size drivers/s390/scsi/zfcp_aux.c: In function ‘zfcp_fsf_incoming_els_plogi’: drivers/s390/scsi/zfcp_aux.c:1432: warning: cast from pointer to integer of different size drivers/s390/scsi/zfcp_aux.c: In function ‘zfcp_fsf_incoming_els_logo’: drivers/s390/scsi/zfcp_aux.c:1457: warning: cast from pointer to integer of different size .. Just passing pointers rids us of these warnings and improves readability. 
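A hedged illustration of the 31-bit warning and of the fix pattern used throughout the diff below (the struct and function names here are made up, not zfcp code): on a 31-bit build a pointer is 4 bytes, so casting it straight to a 64-bit integer triggers "cast from pointer to integer of different size"; casting through unsigned long, which always matches the pointer width, widens cleanly afterwards. That is why the trace code now takes void * parameters and stores them as (unsigned long).

        #include <stdint.h>
        #include <stdio.h>

        struct rec { uint64_t ref; };   /* stands in for the dbf record field */

        static void record_ref_warns(struct rec *r, void *p)
        {
                r->ref = (uint64_t)p;           /* 31/32-bit build: size-mismatch warning */
        }

        static void record_ref_clean(struct rec *r, void *p)
        {
                r->ref = (unsigned long)p;      /* pointer-sized cast, then widened */
        }

        int main(void)
        {
                struct rec r;
                int x;

                record_ref_warns(&r, &x);
                record_ref_clean(&r, &x);
                printf("0x%llx\n", (unsigned long long)r.ref);
                return 0;
        }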
Signed-off-by: Martin Peschke Signed-off-by: Christof Schmitt Signed-off-by: James Bottomley --- drivers/s390/scsi/zfcp_aux.c | 6 +- drivers/s390/scsi/zfcp_ccw.c | 19 +++-- drivers/s390/scsi/zfcp_dbf.c | 22 ++--- drivers/s390/scsi/zfcp_erp.c | 140 ++++++++++++++++--------------- drivers/s390/scsi/zfcp_ext.h | 60 ++++++------- drivers/s390/scsi/zfcp_fsf.c | 149 +++++++++++++++------------------ drivers/s390/scsi/zfcp_qdio.c | 3 +- drivers/s390/scsi/zfcp_scsi.c | 4 +- drivers/s390/scsi/zfcp_sysfs_adapter.c | 9 +- drivers/s390/scsi/zfcp_sysfs_port.c | 8 +- drivers/s390/scsi/zfcp_sysfs_unit.c | 4 +- 11 files changed, 211 insertions(+), 213 deletions(-) (limited to 'drivers/s390') diff --git a/drivers/s390/scsi/zfcp_aux.c b/drivers/s390/scsi/zfcp_aux.c index 05a33c247c68..8c7e2b778ef1 100644 --- a/drivers/s390/scsi/zfcp_aux.c +++ b/drivers/s390/scsi/zfcp_aux.c @@ -1376,7 +1376,7 @@ static void zfcp_fsf_incoming_els_rscn(struct zfcp_fsf_req *fsf_req) "port 0x%016Lx\n", port->wwpn); zfcp_erp_port_reopen(port, ZFCP_STATUS_COMMON_ERP_FAILED, - 82, (u64)fsf_req); + 82, fsf_req); continue; } @@ -1429,7 +1429,7 @@ static void zfcp_fsf_incoming_els_plogi(struct zfcp_fsf_req *fsf_req) status_buffer->d_id, zfcp_get_busid_by_adapter(adapter)); } else { - zfcp_erp_port_forced_reopen(port, 0, 83, (u64)fsf_req); + zfcp_erp_port_forced_reopen(port, 0, 83, fsf_req); } } @@ -1454,7 +1454,7 @@ static void zfcp_fsf_incoming_els_logo(struct zfcp_fsf_req *fsf_req) status_buffer->d_id, zfcp_get_busid_by_adapter(adapter)); } else { - zfcp_erp_port_forced_reopen(port, 0, 84, (u64)fsf_req); + zfcp_erp_port_forced_reopen(port, 0, 84, fsf_req); } } diff --git a/drivers/s390/scsi/zfcp_ccw.c b/drivers/s390/scsi/zfcp_ccw.c index db9f538362a6..66d3b88844b0 100644 --- a/drivers/s390/scsi/zfcp_ccw.c +++ b/drivers/s390/scsi/zfcp_ccw.c @@ -170,9 +170,10 @@ zfcp_ccw_set_online(struct ccw_device *ccw_device) BUG_ON(!zfcp_reqlist_isempty(adapter)); adapter->req_no = 0; - zfcp_erp_modify_adapter_status(adapter, 10, 0, + zfcp_erp_modify_adapter_status(adapter, 10, NULL, ZFCP_STATUS_COMMON_RUNNING, ZFCP_SET); - zfcp_erp_adapter_reopen(adapter, ZFCP_STATUS_COMMON_ERP_FAILED, 85, 0); + zfcp_erp_adapter_reopen(adapter, ZFCP_STATUS_COMMON_ERP_FAILED, 85, + NULL); zfcp_erp_wait(adapter); goto out; @@ -197,7 +198,7 @@ zfcp_ccw_set_offline(struct ccw_device *ccw_device) down(&zfcp_data.config_sema); adapter = dev_get_drvdata(&ccw_device->dev); - zfcp_erp_adapter_shutdown(adapter, 0, 86, 0); + zfcp_erp_adapter_shutdown(adapter, 0, 86, NULL); zfcp_erp_wait(adapter); zfcp_erp_thread_kill(adapter); up(&zfcp_data.config_sema); @@ -223,21 +224,21 @@ zfcp_ccw_notify(struct ccw_device *ccw_device, int event) case CIO_GONE: ZFCP_LOG_NORMAL("adapter %s: device gone\n", zfcp_get_busid_by_adapter(adapter)); - zfcp_erp_adapter_shutdown(adapter, 0, 87, 0); + zfcp_erp_adapter_shutdown(adapter, 0, 87, NULL); break; case CIO_NO_PATH: ZFCP_LOG_NORMAL("adapter %s: no path\n", zfcp_get_busid_by_adapter(adapter)); - zfcp_erp_adapter_shutdown(adapter, 0, 88, 0); + zfcp_erp_adapter_shutdown(adapter, 0, 88, NULL); break; case CIO_OPER: ZFCP_LOG_NORMAL("adapter %s: operational again\n", zfcp_get_busid_by_adapter(adapter)); - zfcp_erp_modify_adapter_status(adapter, 11, 0, + zfcp_erp_modify_adapter_status(adapter, 11, NULL, ZFCP_STATUS_COMMON_RUNNING, ZFCP_SET); - zfcp_erp_adapter_reopen(adapter, - ZFCP_STATUS_COMMON_ERP_FAILED, 89, 0); + zfcp_erp_adapter_reopen(adapter, ZFCP_STATUS_COMMON_ERP_FAILED, + 89, NULL); break; } zfcp_erp_wait(adapter); @@ -269,7 +270,7 @@ 
zfcp_ccw_shutdown(struct ccw_device *cdev) down(&zfcp_data.config_sema); adapter = dev_get_drvdata(&cdev->dev); - zfcp_erp_adapter_shutdown(adapter, 0, 90, 0); + zfcp_erp_adapter_shutdown(adapter, 0, 90, NULL); zfcp_erp_wait(adapter); up(&zfcp_data.config_sema); } diff --git a/drivers/s390/scsi/zfcp_dbf.c b/drivers/s390/scsi/zfcp_dbf.c index c34a874482a5..37b85c67b11d 100644 --- a/drivers/s390/scsi/zfcp_dbf.c +++ b/drivers/s390/scsi/zfcp_dbf.c @@ -700,7 +700,7 @@ void zfcp_rec_dbf_event_thread(u8 id2, struct zfcp_adapter *adapter, int lock) spin_unlock_irqrestore(&adapter->rec_dbf_lock, flags); } -static void zfcp_rec_dbf_event_target(u8 id2, u64 ref, +static void zfcp_rec_dbf_event_target(u8 id2, void *ref, struct zfcp_adapter *adapter, atomic_t *status, atomic_t *erp_count, u64 wwpn, u32 d_id, u64 fcp_lun) @@ -712,7 +712,7 @@ static void zfcp_rec_dbf_event_target(u8 id2, u64 ref, memset(r, 0, sizeof(*r)); r->id = ZFCP_REC_DBF_ID_TARGET; r->id2 = id2; - r->u.target.ref = ref; + r->u.target.ref = (unsigned long)ref; r->u.target.status = atomic_read(status); r->u.target.wwpn = wwpn; r->u.target.d_id = d_id; @@ -728,7 +728,7 @@ static void zfcp_rec_dbf_event_target(u8 id2, u64 ref, * @ref: additional reference (e.g. request) * @adapter: adapter */ -void zfcp_rec_dbf_event_adapter(u8 id, u64 ref, struct zfcp_adapter *adapter) +void zfcp_rec_dbf_event_adapter(u8 id, void *ref, struct zfcp_adapter *adapter) { zfcp_rec_dbf_event_target(id, ref, adapter, &adapter->status, &adapter->erp_counter, 0, 0, 0); @@ -740,7 +740,7 @@ void zfcp_rec_dbf_event_adapter(u8 id, u64 ref, struct zfcp_adapter *adapter) * @ref: additional reference (e.g. request) * @port: port */ -void zfcp_rec_dbf_event_port(u8 id, u64 ref, struct zfcp_port *port) +void zfcp_rec_dbf_event_port(u8 id, void *ref, struct zfcp_port *port) { struct zfcp_adapter *adapter = port->adapter; @@ -755,7 +755,7 @@ void zfcp_rec_dbf_event_port(u8 id, u64 ref, struct zfcp_port *port) * @ref: additional reference (e.g. 
request) * @unit: unit */ -void zfcp_rec_dbf_event_unit(u8 id, u64 ref, struct zfcp_unit *unit) +void zfcp_rec_dbf_event_unit(u8 id, void *ref, struct zfcp_unit *unit) { struct zfcp_port *port = unit->port; struct zfcp_adapter *adapter = port->adapter; @@ -776,8 +776,8 @@ void zfcp_rec_dbf_event_unit(u8 id, u64 ref, struct zfcp_unit *unit) * @port: port * @unit: unit */ -void zfcp_rec_dbf_event_trigger(u8 id2, u64 ref, u8 want, u8 need, u64 action, - struct zfcp_adapter *adapter, +void zfcp_rec_dbf_event_trigger(u8 id2, void *ref, u8 want, u8 need, + void *action, struct zfcp_adapter *adapter, struct zfcp_port *port, struct zfcp_unit *unit) { struct zfcp_rec_dbf_record *r = &adapter->rec_dbf_buf; @@ -787,10 +787,10 @@ void zfcp_rec_dbf_event_trigger(u8 id2, u64 ref, u8 want, u8 need, u64 action, memset(r, 0, sizeof(*r)); r->id = ZFCP_REC_DBF_ID_TRIGGER; r->id2 = id2; - r->u.trigger.ref = ref; + r->u.trigger.ref = (unsigned long)ref; r->u.trigger.want = want; r->u.trigger.need = need; - r->u.trigger.action = action; + r->u.trigger.action = (unsigned long)action; r->u.trigger.as = atomic_read(&adapter->status); if (port) { r->u.trigger.ps = atomic_read(&port->status); @@ -819,10 +819,10 @@ void zfcp_rec_dbf_event_action(u8 id2, struct zfcp_erp_action *erp_action) memset(r, 0, sizeof(*r)); r->id = ZFCP_REC_DBF_ID_ACTION; r->id2 = id2; - r->u.action.action = (u64)erp_action; + r->u.action.action = (unsigned long)erp_action; r->u.action.status = erp_action->status; r->u.action.step = erp_action->step; - r->u.action.fsf_req = (u64)erp_action->fsf_req; + r->u.action.fsf_req = (unsigned long)erp_action->fsf_req; debug_event(adapter->rec_dbf, 4, r, sizeof(*r)); spin_unlock_irqrestore(&adapter->rec_dbf_lock, flags); } diff --git a/drivers/s390/scsi/zfcp_erp.c b/drivers/s390/scsi/zfcp_erp.c index feb1fda33d25..805484658dd9 100644 --- a/drivers/s390/scsi/zfcp_erp.c +++ b/drivers/s390/scsi/zfcp_erp.c @@ -27,15 +27,16 @@ static int zfcp_erp_adisc(struct zfcp_port *); static void zfcp_erp_adisc_handler(unsigned long); static int zfcp_erp_adapter_reopen_internal(struct zfcp_adapter *, int, u8, - u64); + void *); static int zfcp_erp_port_forced_reopen_internal(struct zfcp_port *, int, u8, - u64); -static int zfcp_erp_port_reopen_internal(struct zfcp_port *, int, u8, u64); -static int zfcp_erp_unit_reopen_internal(struct zfcp_unit *, int, u8, u64); + void *); +static int zfcp_erp_port_reopen_internal(struct zfcp_port *, int, u8, void *); +static int zfcp_erp_unit_reopen_internal(struct zfcp_unit *, int, u8, void *); static int zfcp_erp_port_reopen_all_internal(struct zfcp_adapter *, int, u8, - u64); -static int zfcp_erp_unit_reopen_all_internal(struct zfcp_port *, int, u8, u64); + void *); +static int zfcp_erp_unit_reopen_all_internal(struct zfcp_port *, int, u8, + void *); static void zfcp_erp_adapter_block(struct zfcp_adapter *, int); static void zfcp_erp_adapter_unblock(struct zfcp_adapter *); @@ -101,7 +102,7 @@ static void zfcp_erp_action_dismiss(struct zfcp_erp_action *); static int zfcp_erp_action_enqueue(int, struct zfcp_adapter *, struct zfcp_port *, struct zfcp_unit *, - u8 id, u64 ref); + u8 id, void *ref); static int zfcp_erp_action_dequeue(struct zfcp_erp_action *); static void zfcp_erp_action_cleanup(int, struct zfcp_adapter *, struct zfcp_port *, struct zfcp_unit *, @@ -165,7 +166,7 @@ static void zfcp_close_fsf(struct zfcp_adapter *adapter) /* reset FSF request sequence number */ adapter->fsf_req_seq_no = 0; /* all ports and units are closed */ - zfcp_erp_modify_adapter_status(adapter, 24, 
0, + zfcp_erp_modify_adapter_status(adapter, 24, NULL, ZFCP_STATUS_COMMON_OPEN, ZFCP_CLEAR); } @@ -181,7 +182,8 @@ static void zfcp_close_fsf(struct zfcp_adapter *adapter) static void zfcp_fsf_request_timeout_handler(unsigned long data) { struct zfcp_adapter *adapter = (struct zfcp_adapter *) data; - zfcp_erp_adapter_reopen(adapter, ZFCP_STATUS_COMMON_ERP_FAILED, 62, 0); + zfcp_erp_adapter_reopen(adapter, ZFCP_STATUS_COMMON_ERP_FAILED, 62, + NULL); } void zfcp_fsf_start_timer(struct zfcp_fsf_req *fsf_req, unsigned long timeout) @@ -203,7 +205,7 @@ void zfcp_fsf_start_timer(struct zfcp_fsf_req *fsf_req, unsigned long timeout) * <0 - failed to initiate action */ static int zfcp_erp_adapter_reopen_internal(struct zfcp_adapter *adapter, - int clear_mask, u8 id, u64 ref) + int clear_mask, u8 id, void *ref) { int retval; @@ -216,7 +218,7 @@ static int zfcp_erp_adapter_reopen_internal(struct zfcp_adapter *adapter, ZFCP_LOG_DEBUG("skipped reopen of failed adapter %s\n", zfcp_get_busid_by_adapter(adapter)); /* ensure propagation of failed status to new devices */ - zfcp_erp_adapter_failed(adapter, 13, 0); + zfcp_erp_adapter_failed(adapter, 13, NULL); retval = -EIO; goto out; } @@ -237,7 +239,7 @@ static int zfcp_erp_adapter_reopen_internal(struct zfcp_adapter *adapter, * <0 - failed to initiate action */ int zfcp_erp_adapter_reopen(struct zfcp_adapter *adapter, int clear_mask, - u8 id, u64 ref) + u8 id, void *ref) { int retval; unsigned long flags; @@ -252,7 +254,7 @@ int zfcp_erp_adapter_reopen(struct zfcp_adapter *adapter, int clear_mask, } int zfcp_erp_adapter_shutdown(struct zfcp_adapter *adapter, int clear_mask, - u8 id, u64 ref) + u8 id, void *ref) { int retval; @@ -265,7 +267,7 @@ int zfcp_erp_adapter_shutdown(struct zfcp_adapter *adapter, int clear_mask, } int zfcp_erp_port_shutdown(struct zfcp_port *port, int clear_mask, u8 id, - u64 ref) + void *ref) { int retval; @@ -278,7 +280,7 @@ int zfcp_erp_port_shutdown(struct zfcp_port *port, int clear_mask, u8 id, } int zfcp_erp_unit_shutdown(struct zfcp_unit *unit, int clear_mask, u8 id, - u64 ref) + void *ref) { int retval; @@ -399,7 +401,7 @@ zfcp_erp_adisc_handler(unsigned long data) "force physical port reopen " "(adapter %s, port d_id=0x%06x)\n", zfcp_get_busid_by_adapter(adapter), d_id); - if (zfcp_erp_port_forced_reopen(port, 0, 63, 0)) + if (zfcp_erp_port_forced_reopen(port, 0, 63, NULL)) ZFCP_LOG_NORMAL("failed reopen of port " "(adapter %s, wwpn=0x%016Lx)\n", zfcp_get_busid_by_port(port), @@ -426,7 +428,7 @@ zfcp_erp_adisc_handler(unsigned long data) "adisc_resp_wwpn=0x%016Lx)\n", zfcp_get_busid_by_port(port), port->wwpn, (wwn_t) adisc->wwpn); - if (zfcp_erp_port_reopen(port, 0, 64, 0)) + if (zfcp_erp_port_reopen(port, 0, 64, NULL)) ZFCP_LOG_NORMAL("failed reopen of port " "(adapter %s, wwpn=0x%016Lx)\n", zfcp_get_busid_by_port(port), @@ -460,7 +462,7 @@ zfcp_test_link(struct zfcp_port *port) ZFCP_LOG_NORMAL("reopen needed for port 0x%016Lx " "on adapter %s\n ", port->wwpn, zfcp_get_busid_by_port(port)); - retval = zfcp_erp_port_forced_reopen(port, 0, 65, 0); + retval = zfcp_erp_port_forced_reopen(port, 0, 65, NULL); if (retval != 0) { ZFCP_LOG_NORMAL("reopen of remote port 0x%016Lx " "on adapter %s failed\n", port->wwpn, @@ -484,7 +486,8 @@ zfcp_test_link(struct zfcp_port *port) * <0 - failed to initiate action */ static int zfcp_erp_port_forced_reopen_internal(struct zfcp_port *port, - int clear_mask, u8 id, u64 ref) + int clear_mask, u8 id, + void *ref) { int retval; @@ -518,7 +521,7 @@ static int 
zfcp_erp_port_forced_reopen_internal(struct zfcp_port *port, * <0 - failed to initiate action */ int zfcp_erp_port_forced_reopen(struct zfcp_port *port, int clear_mask, u8 id, - u64 ref) + void *ref) { int retval; unsigned long flags; @@ -546,7 +549,7 @@ int zfcp_erp_port_forced_reopen(struct zfcp_port *port, int clear_mask, u8 id, * <0 - failed to initiate action */ static int zfcp_erp_port_reopen_internal(struct zfcp_port *port, int clear_mask, - u8 id, u64 ref) + u8 id, void *ref) { int retval; @@ -560,7 +563,7 @@ static int zfcp_erp_port_reopen_internal(struct zfcp_port *port, int clear_mask, "on adapter %s\n", port->wwpn, zfcp_get_busid_by_port(port)); /* ensure propagation of failed status to new devices */ - zfcp_erp_port_failed(port, 14, 0); + zfcp_erp_port_failed(port, 14, NULL); retval = -EIO; goto out; } @@ -582,7 +585,8 @@ static int zfcp_erp_port_reopen_internal(struct zfcp_port *port, int clear_mask, * correct locking. An error recovery task is initiated to do the reopen. * To wait for the completion of the reopen zfcp_erp_wait should be used. */ -int zfcp_erp_port_reopen(struct zfcp_port *port, int clear_mask, u8 id, u64 ref) +int zfcp_erp_port_reopen(struct zfcp_port *port, int clear_mask, u8 id, + void *ref) { int retval; unsigned long flags; @@ -608,7 +612,7 @@ int zfcp_erp_port_reopen(struct zfcp_port *port, int clear_mask, u8 id, u64 ref) * <0 - failed to initiate action */ static int zfcp_erp_unit_reopen_internal(struct zfcp_unit *unit, int clear_mask, - u8 id, u64 ref) + u8 id, void *ref) { int retval; struct zfcp_adapter *adapter = unit->port->adapter; @@ -644,7 +648,8 @@ static int zfcp_erp_unit_reopen_internal(struct zfcp_unit *unit, int clear_mask, * locking. An error recovery task is initiated to do the reopen. * To wait for the completion of the reopen zfcp_erp_wait should be used. 
*/ -int zfcp_erp_unit_reopen(struct zfcp_unit *unit, int clear_mask, u8 id, u64 ref) +int zfcp_erp_unit_reopen(struct zfcp_unit *unit, int clear_mask, u8 id, + void *ref) { int retval; unsigned long flags; @@ -668,7 +673,7 @@ int zfcp_erp_unit_reopen(struct zfcp_unit *unit, int clear_mask, u8 id, u64 ref) */ static void zfcp_erp_adapter_block(struct zfcp_adapter *adapter, int clear_mask) { - zfcp_erp_modify_adapter_status(adapter, 15, 0, + zfcp_erp_modify_adapter_status(adapter, 15, NULL, ZFCP_STATUS_COMMON_UNBLOCKED | clear_mask, ZFCP_CLEAR); } @@ -704,7 +709,7 @@ static void zfcp_erp_adapter_unblock(struct zfcp_adapter *adapter) { if (atomic_test_and_set_mask(ZFCP_STATUS_COMMON_UNBLOCKED, &adapter->status)) - zfcp_rec_dbf_event_adapter(16, 0, adapter); + zfcp_rec_dbf_event_adapter(16, NULL, adapter); } /* @@ -719,7 +724,7 @@ static void zfcp_erp_adapter_unblock(struct zfcp_adapter *adapter) static void zfcp_erp_port_block(struct zfcp_port *port, int clear_mask) { - zfcp_erp_modify_port_status(port, 17, 0, + zfcp_erp_modify_port_status(port, 17, NULL, ZFCP_STATUS_COMMON_UNBLOCKED | clear_mask, ZFCP_CLEAR); } @@ -736,7 +741,7 @@ zfcp_erp_port_unblock(struct zfcp_port *port) { if (atomic_test_and_set_mask(ZFCP_STATUS_COMMON_UNBLOCKED, &port->status)) - zfcp_rec_dbf_event_port(18, 0, port); + zfcp_rec_dbf_event_port(18, NULL, port); } /* @@ -751,7 +756,7 @@ zfcp_erp_port_unblock(struct zfcp_port *port) static void zfcp_erp_unit_block(struct zfcp_unit *unit, int clear_mask) { - zfcp_erp_modify_unit_status(unit, 19, 0, + zfcp_erp_modify_unit_status(unit, 19, NULL, ZFCP_STATUS_COMMON_UNBLOCKED | clear_mask, ZFCP_CLEAR); } @@ -768,7 +773,7 @@ zfcp_erp_unit_unblock(struct zfcp_unit *unit) { if (atomic_test_and_set_mask(ZFCP_STATUS_COMMON_UNBLOCKED, &unit->status)) - zfcp_rec_dbf_event_unit(20, 0, unit); + zfcp_rec_dbf_event_unit(20, NULL, unit); } static void @@ -1140,7 +1145,7 @@ zfcp_erp_strategy(struct zfcp_erp_action *erp_action) "restarting I/O on adapter %s " "to free mempool\n", zfcp_get_busid_by_adapter(adapter)); - zfcp_erp_adapter_reopen_internal(adapter, 0, 66, 0); + zfcp_erp_adapter_reopen_internal(adapter, 0, 66, NULL); } else { retval = zfcp_erp_strategy_memwait(erp_action); } @@ -1295,7 +1300,7 @@ zfcp_erp_strategy_memwait(struct zfcp_erp_action *erp_action) * */ void -zfcp_erp_adapter_failed(struct zfcp_adapter *adapter, u8 id, u64 ref) +zfcp_erp_adapter_failed(struct zfcp_adapter *adapter, u8 id, void *ref) { zfcp_erp_modify_adapter_status(adapter, id, ref, ZFCP_STATUS_COMMON_ERP_FAILED, ZFCP_SET); @@ -1310,7 +1315,7 @@ zfcp_erp_adapter_failed(struct zfcp_adapter *adapter, u8 id, u64 ref) * */ void -zfcp_erp_port_failed(struct zfcp_port *port, u8 id, u64 ref) +zfcp_erp_port_failed(struct zfcp_port *port, u8 id, void *ref) { zfcp_erp_modify_port_status(port, id, ref, ZFCP_STATUS_COMMON_ERP_FAILED, ZFCP_SET); @@ -1331,7 +1336,7 @@ zfcp_erp_port_failed(struct zfcp_port *port, u8 id, u64 ref) * */ void -zfcp_erp_unit_failed(struct zfcp_unit *unit, u8 id, u64 ref) +zfcp_erp_unit_failed(struct zfcp_unit *unit, u8 id, void *ref) { zfcp_erp_modify_unit_status(unit, id, ref, ZFCP_STATUS_COMMON_ERP_FAILED, ZFCP_SET); @@ -1395,7 +1400,7 @@ zfcp_erp_strategy_statechange(int action, status)) { zfcp_erp_adapter_reopen_internal(adapter, ZFCP_STATUS_COMMON_ERP_FAILED, - 67, 0); + 67, NULL); retval = ZFCP_ERP_EXIT; } break; @@ -1406,7 +1411,7 @@ zfcp_erp_strategy_statechange(int action, status)) { zfcp_erp_port_reopen_internal(port, ZFCP_STATUS_COMMON_ERP_FAILED, - 68, 0); + 68, NULL); retval = 
ZFCP_ERP_EXIT; } break; @@ -1416,7 +1421,7 @@ zfcp_erp_strategy_statechange(int action, status)) { zfcp_erp_unit_reopen_internal(unit, ZFCP_STATUS_COMMON_ERP_FAILED, - 69, 0); + 69, NULL); retval = ZFCP_ERP_EXIT; } break; @@ -1448,7 +1453,7 @@ zfcp_erp_strategy_check_unit(struct zfcp_unit *unit, int result) case ZFCP_ERP_FAILED : atomic_inc(&unit->erp_counter); if (atomic_read(&unit->erp_counter) > ZFCP_MAX_ERPS) - zfcp_erp_unit_failed(unit, 21, 0); + zfcp_erp_unit_failed(unit, 21, NULL); break; case ZFCP_ERP_EXIT : /* nothing */ @@ -1474,7 +1479,7 @@ zfcp_erp_strategy_check_port(struct zfcp_port *port, int result) case ZFCP_ERP_FAILED : atomic_inc(&port->erp_counter); if (atomic_read(&port->erp_counter) > ZFCP_MAX_ERPS) - zfcp_erp_port_failed(port, 22, 0); + zfcp_erp_port_failed(port, 22, NULL); break; case ZFCP_ERP_EXIT : /* nothing */ @@ -1500,7 +1505,7 @@ zfcp_erp_strategy_check_adapter(struct zfcp_adapter *adapter, int result) case ZFCP_ERP_FAILED : atomic_inc(&adapter->erp_counter); if (atomic_read(&adapter->erp_counter) > ZFCP_MAX_ERPS) - zfcp_erp_adapter_failed(adapter, 23, 0); + zfcp_erp_adapter_failed(adapter, 23, NULL); break; case ZFCP_ERP_EXIT : /* nothing */ @@ -1588,29 +1593,29 @@ zfcp_erp_strategy_followup_actions(int action, case ZFCP_ERP_ACTION_REOPEN_ADAPTER: if (status == ZFCP_ERP_SUCCEEDED) - zfcp_erp_port_reopen_all_internal(adapter, 0, 70, 0); + zfcp_erp_port_reopen_all_internal(adapter, 0, 70, NULL); else - zfcp_erp_adapter_reopen_internal(adapter, 0, 71, 0); + zfcp_erp_adapter_reopen_internal(adapter, 0, 71, NULL); break; case ZFCP_ERP_ACTION_REOPEN_PORT_FORCED: if (status == ZFCP_ERP_SUCCEEDED) - zfcp_erp_port_reopen_internal(port, 0, 72, 0); + zfcp_erp_port_reopen_internal(port, 0, 72, NULL); else - zfcp_erp_adapter_reopen_internal(adapter, 0, 73, 0); + zfcp_erp_adapter_reopen_internal(adapter, 0, 73, NULL); break; case ZFCP_ERP_ACTION_REOPEN_PORT: if (status == ZFCP_ERP_SUCCEEDED) - zfcp_erp_unit_reopen_all_internal(port, 0, 74, 0); + zfcp_erp_unit_reopen_all_internal(port, 0, 74, NULL); else - zfcp_erp_port_forced_reopen_internal(port, 0, 75, 0); + zfcp_erp_port_forced_reopen_internal(port, 0, 75, NULL); break; case ZFCP_ERP_ACTION_REOPEN_UNIT: /* Nothing to do if status == ZFCP_ERP_SUCCEEDED */ if (status != ZFCP_ERP_SUCCEEDED) - zfcp_erp_port_reopen_internal(unit->port, 0, 76, 0); + zfcp_erp_port_reopen_internal(unit->port, 0, 76, NULL); break; } @@ -1654,7 +1659,7 @@ zfcp_erp_wait(struct zfcp_adapter *adapter) } void zfcp_erp_modify_adapter_status(struct zfcp_adapter *adapter, u8 id, - u64 ref, u32 mask, int set_or_clear) + void *ref, u32 mask, int set_or_clear) { struct zfcp_port *port; u32 changed, common_mask = mask & ZFCP_COMMON_FLAGS; @@ -1682,7 +1687,7 @@ void zfcp_erp_modify_adapter_status(struct zfcp_adapter *adapter, u8 id, * purpose: sets the port and all underlying devices to ERP_FAILED * */ -void zfcp_erp_modify_port_status(struct zfcp_port *port, u8 id, u64 ref, +void zfcp_erp_modify_port_status(struct zfcp_port *port, u8 id, void *ref, u32 mask, int set_or_clear) { struct zfcp_unit *unit; @@ -1711,7 +1716,7 @@ void zfcp_erp_modify_port_status(struct zfcp_port *port, u8 id, u64 ref, * purpose: sets the unit to ERP_FAILED * */ -void zfcp_erp_modify_unit_status(struct zfcp_unit *unit, u8 id, u64 ref, +void zfcp_erp_modify_unit_status(struct zfcp_unit *unit, u8 id, void *ref, u32 mask, int set_or_clear) { u32 changed; @@ -1738,7 +1743,7 @@ void zfcp_erp_modify_unit_status(struct zfcp_unit *unit, u8 id, u64 ref, * <0 - failed to initiate action */ 
int zfcp_erp_port_reopen_all(struct zfcp_adapter *adapter, int clear_mask, - u8 id, u64 ref) + u8 id, void *ref) { int retval; unsigned long flags; @@ -1754,7 +1759,7 @@ int zfcp_erp_port_reopen_all(struct zfcp_adapter *adapter, int clear_mask, } static int zfcp_erp_port_reopen_all_internal(struct zfcp_adapter *adapter, - int clear_mask, u8 id, u64 ref) + int clear_mask, u8 id, void *ref) { int retval = 0; struct zfcp_port *port; @@ -1775,7 +1780,7 @@ static int zfcp_erp_port_reopen_all_internal(struct zfcp_adapter *adapter, * returns: FIXME */ static int zfcp_erp_unit_reopen_all_internal(struct zfcp_port *port, - int clear_mask, u8 id, u64 ref) + int clear_mask, u8 id, void *ref) { int retval = 0; struct zfcp_unit *unit; @@ -2291,7 +2296,7 @@ zfcp_erp_port_strategy_open_common(struct zfcp_erp_action *erp_action) port->wwpn, zfcp_get_busid_by_adapter(adapter), adapter->peer_wwpn); - zfcp_erp_port_failed(port, 25, 0); + zfcp_erp_port_failed(port, 25, NULL); retval = ZFCP_ERP_FAILED; break; } @@ -2318,7 +2323,7 @@ zfcp_erp_port_strategy_open_common(struct zfcp_erp_action *erp_action) atomic_set_mask(ZFCP_STATUS_COMMON_RUNNING, &adapter->nameserver_port->status); if (zfcp_erp_port_reopen(adapter->nameserver_port, 0, - 77, (u64)erp_action) >= 0) { + 77, erp_action) >= 0) { erp_action->step = ZFCP_ERP_STEP_NAMESERVER_OPEN; retval = ZFCP_ERP_CONTINUES; @@ -2349,7 +2354,7 @@ zfcp_erp_port_strategy_open_common(struct zfcp_erp_action *erp_action) "for port 0x%016Lx " "(misconfigured WWPN?)\n", port->wwpn); - zfcp_erp_port_failed(port, 26, 0); + zfcp_erp_port_failed(port, 26, NULL); retval = ZFCP_ERP_EXIT; } else { ZFCP_LOG_DEBUG("nameserver look-up failed for " @@ -2449,7 +2454,8 @@ zfcp_erp_port_strategy_open_nameserver_wakeup(struct zfcp_erp_action if (atomic_test_mask( ZFCP_STATUS_COMMON_ERP_FAILED, &adapter->nameserver_port->status)) - zfcp_erp_port_failed(erp_action->port, 27, 0); + zfcp_erp_port_failed(erp_action->port, 27, + NULL); zfcp_erp_action_ready(erp_action); } } @@ -2745,7 +2751,7 @@ void zfcp_erp_start_timer(struct zfcp_fsf_req *fsf_req) */ static int zfcp_erp_action_enqueue(int want, struct zfcp_adapter *adapter, struct zfcp_port *port, - struct zfcp_unit *unit, u8 id, u64 ref) + struct zfcp_unit *unit, u8 id, void *ref) { int retval = 1, need = want; struct zfcp_erp_action *erp_action = NULL; @@ -2888,7 +2894,7 @@ static int zfcp_erp_action_enqueue(int want, struct zfcp_adapter *adapter, zfcp_rec_dbf_event_thread(1, adapter, 0); retval = 0; out: - zfcp_rec_dbf_event_trigger(id, ref, want, need, (u64)erp_action, + zfcp_rec_dbf_event_trigger(id, ref, want, need, erp_action, adapter, port, unit); return retval; } @@ -3048,7 +3054,7 @@ static void zfcp_erp_action_to_ready(struct zfcp_erp_action *erp_action) zfcp_rec_dbf_event_action(146, erp_action); } -void zfcp_erp_port_boxed(struct zfcp_port *port, u8 id, u64 ref) +void zfcp_erp_port_boxed(struct zfcp_port *port, u8 id, void *ref) { unsigned long flags; @@ -3059,14 +3065,14 @@ void zfcp_erp_port_boxed(struct zfcp_port *port, u8 id, u64 ref) zfcp_erp_port_reopen(port, ZFCP_STATUS_COMMON_ERP_FAILED, id, ref); } -void zfcp_erp_unit_boxed(struct zfcp_unit *unit, u8 id, u64 ref) +void zfcp_erp_unit_boxed(struct zfcp_unit *unit, u8 id, void *ref) { zfcp_erp_modify_unit_status(unit, id, ref, ZFCP_STATUS_COMMON_ACCESS_BOXED, ZFCP_SET); zfcp_erp_unit_reopen(unit, ZFCP_STATUS_COMMON_ERP_FAILED, id, ref); } -void zfcp_erp_port_access_denied(struct zfcp_port *port, u8 id, u64 ref) +void zfcp_erp_port_access_denied(struct zfcp_port *port, u8 
id, void *ref) { unsigned long flags; @@ -3077,7 +3083,7 @@ void zfcp_erp_port_access_denied(struct zfcp_port *port, u8 id, u64 ref) read_unlock_irqrestore(&zfcp_data.config_lock, flags); } -void zfcp_erp_unit_access_denied(struct zfcp_unit *unit, u8 id, u64 ref) +void zfcp_erp_unit_access_denied(struct zfcp_unit *unit, u8 id, void *ref) { zfcp_erp_modify_unit_status(unit, id, ref, ZFCP_STATUS_COMMON_ERP_FAILED | @@ -3085,7 +3091,7 @@ void zfcp_erp_unit_access_denied(struct zfcp_unit *unit, u8 id, u64 ref) } void zfcp_erp_adapter_access_changed(struct zfcp_adapter *adapter, u8 id, - u64 ref) + void *ref) { struct zfcp_port *port; unsigned long flags; @@ -3102,7 +3108,7 @@ void zfcp_erp_adapter_access_changed(struct zfcp_adapter *adapter, u8 id, read_unlock_irqrestore(&zfcp_data.config_lock, flags); } -void zfcp_erp_port_access_changed(struct zfcp_port *port, u8 id, u64 ref) +void zfcp_erp_port_access_changed(struct zfcp_port *port, u8 id, void *ref) { struct zfcp_adapter *adapter = port->adapter; struct zfcp_unit *unit; @@ -3126,7 +3132,7 @@ void zfcp_erp_port_access_changed(struct zfcp_port *port, u8 id, u64 ref) zfcp_get_busid_by_adapter(adapter), port->wwpn); } -void zfcp_erp_unit_access_changed(struct zfcp_unit *unit, u8 id, u64 ref) +void zfcp_erp_unit_access_changed(struct zfcp_unit *unit, u8 id, void *ref) { struct zfcp_adapter *adapter = unit->port->adapter; diff --git a/drivers/s390/scsi/zfcp_ext.h b/drivers/s390/scsi/zfcp_ext.h index 6de0147d84d8..6abf178fda5d 100644 --- a/drivers/s390/scsi/zfcp_ext.h +++ b/drivers/s390/scsi/zfcp_ext.h @@ -131,23 +131,25 @@ extern int zfcp_scsi_command_sync(struct zfcp_unit *, struct scsi_cmnd *, int); extern struct fc_function_template zfcp_transport_functions; /******************************** ERP ****************************************/ -extern void zfcp_erp_modify_adapter_status(struct zfcp_adapter *, u8, u64, u32, - int); -extern int zfcp_erp_adapter_reopen(struct zfcp_adapter *, int, u8, u64); -extern int zfcp_erp_adapter_shutdown(struct zfcp_adapter *, int, u8, u64); -extern void zfcp_erp_adapter_failed(struct zfcp_adapter *, u8, u64); - -extern void zfcp_erp_modify_port_status(struct zfcp_port *, u8, u64, u32, int); -extern int zfcp_erp_port_reopen(struct zfcp_port *, int, u8, u64); -extern int zfcp_erp_port_shutdown(struct zfcp_port *, int, u8, u64); -extern int zfcp_erp_port_forced_reopen(struct zfcp_port *, int, u8, u64); -extern void zfcp_erp_port_failed(struct zfcp_port *, u8, u64); -extern int zfcp_erp_port_reopen_all(struct zfcp_adapter *, int, u8, u64); - -extern void zfcp_erp_modify_unit_status(struct zfcp_unit *, u8, u64, u32, int); -extern int zfcp_erp_unit_reopen(struct zfcp_unit *, int, u8, u64); -extern int zfcp_erp_unit_shutdown(struct zfcp_unit *, int, u8, u64); -extern void zfcp_erp_unit_failed(struct zfcp_unit *, u8, u64); +extern void zfcp_erp_modify_adapter_status(struct zfcp_adapter *, u8, void *, + u32, int); +extern int zfcp_erp_adapter_reopen(struct zfcp_adapter *, int, u8, void *); +extern int zfcp_erp_adapter_shutdown(struct zfcp_adapter *, int, u8, void *); +extern void zfcp_erp_adapter_failed(struct zfcp_adapter *, u8, void *); + +extern void zfcp_erp_modify_port_status(struct zfcp_port *, u8, void *, u32, + int); +extern int zfcp_erp_port_reopen(struct zfcp_port *, int, u8, void *); +extern int zfcp_erp_port_shutdown(struct zfcp_port *, int, u8, void *); +extern int zfcp_erp_port_forced_reopen(struct zfcp_port *, int, u8, void *); +extern void zfcp_erp_port_failed(struct zfcp_port *, u8, void *); +extern int 
zfcp_erp_port_reopen_all(struct zfcp_adapter *, int, u8, void *); + +extern void zfcp_erp_modify_unit_status(struct zfcp_unit *, u8, void *, u32, + int); +extern int zfcp_erp_unit_reopen(struct zfcp_unit *, int, u8, void *); +extern int zfcp_erp_unit_shutdown(struct zfcp_unit *, int, u8, void *); +extern void zfcp_erp_unit_failed(struct zfcp_unit *, u8, void *); extern int zfcp_erp_thread_setup(struct zfcp_adapter *); extern int zfcp_erp_thread_kill(struct zfcp_adapter *); @@ -156,22 +158,22 @@ extern void zfcp_erp_async_handler(struct zfcp_erp_action *, unsigned long); extern int zfcp_test_link(struct zfcp_port *); -extern void zfcp_erp_port_boxed(struct zfcp_port *, u8 id, u64 ref); -extern void zfcp_erp_unit_boxed(struct zfcp_unit *, u8 id, u64 ref); -extern void zfcp_erp_port_access_denied(struct zfcp_port *, u8 id, u64 ref); -extern void zfcp_erp_unit_access_denied(struct zfcp_unit *, u8 id, u64 ref); -extern void zfcp_erp_adapter_access_changed(struct zfcp_adapter *, u8, u64); -extern void zfcp_erp_port_access_changed(struct zfcp_port *, u8, u64); -extern void zfcp_erp_unit_access_changed(struct zfcp_unit *, u8, u64); +extern void zfcp_erp_port_boxed(struct zfcp_port *, u8 id, void *ref); +extern void zfcp_erp_unit_boxed(struct zfcp_unit *, u8 id, void *ref); +extern void zfcp_erp_port_access_denied(struct zfcp_port *, u8 id, void *ref); +extern void zfcp_erp_unit_access_denied(struct zfcp_unit *, u8 id, void *ref); +extern void zfcp_erp_adapter_access_changed(struct zfcp_adapter *, u8, void *); +extern void zfcp_erp_port_access_changed(struct zfcp_port *, u8, void *); +extern void zfcp_erp_unit_access_changed(struct zfcp_unit *, u8, void *); /******************************** AUX ****************************************/ extern void zfcp_rec_dbf_event_thread(u8 id, struct zfcp_adapter *adapter, int lock); -extern void zfcp_rec_dbf_event_adapter(u8 id, u64 ref, struct zfcp_adapter *); -extern void zfcp_rec_dbf_event_port(u8 id, u64 ref, struct zfcp_port *port); -extern void zfcp_rec_dbf_event_unit(u8 id, u64 ref, struct zfcp_unit *unit); -extern void zfcp_rec_dbf_event_trigger(u8 id, u64 ref, u8 want, u8 need, - u64 action, struct zfcp_adapter *, +extern void zfcp_rec_dbf_event_adapter(u8 id, void *ref, struct zfcp_adapter *); +extern void zfcp_rec_dbf_event_port(u8 id, void *ref, struct zfcp_port *port); +extern void zfcp_rec_dbf_event_unit(u8 id, void *ref, struct zfcp_unit *unit); +extern void zfcp_rec_dbf_event_trigger(u8 id, void *ref, u8 want, u8 need, + void *action, struct zfcp_adapter *, struct zfcp_port *, struct zfcp_unit *); extern void zfcp_rec_dbf_event_action(u8 id, struct zfcp_erp_action *); diff --git a/drivers/s390/scsi/zfcp_fsf.c b/drivers/s390/scsi/zfcp_fsf.c index b7aa9696ba60..3211dcc59543 100644 --- a/drivers/s390/scsi/zfcp_fsf.c +++ b/drivers/s390/scsi/zfcp_fsf.c @@ -298,7 +298,7 @@ zfcp_fsf_protstatus_eval(struct zfcp_fsf_req *fsf_req) zfcp_get_busid_by_adapter(adapter), prot_status_qual->version_error.fsf_version, ZFCP_QTCB_VERSION); - zfcp_erp_adapter_shutdown(adapter, 0, 117, (u64)fsf_req); + zfcp_erp_adapter_shutdown(adapter, 0, 117, fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -309,7 +309,7 @@ zfcp_fsf_protstatus_eval(struct zfcp_fsf_req *fsf_req) qtcb->prefix.req_seq_no, zfcp_get_busid_by_adapter(adapter), prot_status_qual->sequence_error.exp_req_seq_no); - zfcp_erp_adapter_reopen(adapter, 0, 98, (u64)fsf_req); + zfcp_erp_adapter_reopen(adapter, 0, 98, fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_RETRY; fsf_req->status |= 
ZFCP_STATUS_FSFREQ_ERROR; break; @@ -320,7 +320,7 @@ zfcp_fsf_protstatus_eval(struct zfcp_fsf_req *fsf_req) "that used on adapter %s. " "Stopping all operations on this adapter.\n", zfcp_get_busid_by_adapter(adapter)); - zfcp_erp_adapter_shutdown(adapter, 0, 118, (u64)fsf_req); + zfcp_erp_adapter_shutdown(adapter, 0, 118, fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -337,7 +337,7 @@ zfcp_fsf_protstatus_eval(struct zfcp_fsf_req *fsf_req) *(unsigned long long*) (&qtcb->bottom.support.req_handle), zfcp_get_busid_by_adapter(adapter)); - zfcp_erp_adapter_shutdown(adapter, 0, 78, (u64)fsf_req); + zfcp_erp_adapter_shutdown(adapter, 0, 78, fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -345,7 +345,7 @@ zfcp_fsf_protstatus_eval(struct zfcp_fsf_req *fsf_req) zfcp_fsf_link_down_info_eval(fsf_req, 37, &prot_status_qual->link_down_info); /* FIXME: reopening adapter now? better wait for link up */ - zfcp_erp_adapter_reopen(adapter, 0, 79, (u64)fsf_req); + zfcp_erp_adapter_reopen(adapter, 0, 79, fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -355,13 +355,13 @@ zfcp_fsf_protstatus_eval(struct zfcp_fsf_req *fsf_req) "Re-starting operations on this adapter.\n", zfcp_get_busid_by_adapter(adapter)); /* All ports should be marked as ready to run again */ - zfcp_erp_modify_adapter_status(adapter, 28, - 0, ZFCP_STATUS_COMMON_RUNNING, + zfcp_erp_modify_adapter_status(adapter, 28, NULL, + ZFCP_STATUS_COMMON_RUNNING, ZFCP_SET); zfcp_erp_adapter_reopen(adapter, ZFCP_STATUS_ADAPTER_LINK_UNPLUGGED | ZFCP_STATUS_COMMON_ERP_FAILED, - 99, (u64)fsf_req); + 99, fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -371,7 +371,7 @@ zfcp_fsf_protstatus_eval(struct zfcp_fsf_req *fsf_req) "Restarting all operations on this " "adapter.\n", zfcp_get_busid_by_adapter(adapter)); - zfcp_erp_adapter_reopen(adapter, 0, 100, (u64)fsf_req); + zfcp_erp_adapter_reopen(adapter, 0, 100, fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_RETRY; fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -384,7 +384,7 @@ zfcp_fsf_protstatus_eval(struct zfcp_fsf_req *fsf_req) "(debug info 0x%x).\n", zfcp_get_busid_by_adapter(adapter), qtcb->prefix.prot_status); - zfcp_erp_adapter_shutdown(adapter, 0, 119, (u64)fsf_req); + zfcp_erp_adapter_shutdown(adapter, 0, 119, fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; } @@ -423,8 +423,7 @@ zfcp_fsf_fsfstatus_eval(struct zfcp_fsf_req *fsf_req) "(debug info 0x%x).\n", zfcp_get_busid_by_adapter(fsf_req->adapter), fsf_req->qtcb->header.fsf_command); - zfcp_erp_adapter_shutdown(fsf_req->adapter, 0, 120, - (u64)fsf_req); + zfcp_erp_adapter_shutdown(fsf_req->adapter, 0, 120, fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -478,8 +477,7 @@ zfcp_fsf_fsfstatus_qual_eval(struct zfcp_fsf_req *fsf_req) "problem on the adapter %s " "Stopping all operations on this adapter. 
", zfcp_get_busid_by_adapter(fsf_req->adapter)); - zfcp_erp_adapter_shutdown(fsf_req->adapter, 0, 121, - (u64)fsf_req); + zfcp_erp_adapter_shutdown(fsf_req->adapter, 0, 121, fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; case FSF_SQ_ULP_PROGRAMMING_ERROR: @@ -605,7 +603,7 @@ zfcp_fsf_link_down_info_eval(struct zfcp_fsf_req *fsf_req, u8 id, link_down->vendor_specific_code); out: - zfcp_erp_adapter_failed(adapter, id, (u64)fsf_req); + zfcp_erp_adapter_failed(adapter, id, fsf_req); } /* @@ -799,11 +797,11 @@ zfcp_fsf_status_read_port_closed(struct zfcp_fsf_req *fsf_req) switch (status_buffer->status_subtype) { case FSF_STATUS_READ_SUB_CLOSE_PHYS_PORT: - zfcp_erp_port_reopen(port, 0, 101, (u64)fsf_req); + zfcp_erp_port_reopen(port, 0, 101, fsf_req); break; case FSF_STATUS_READ_SUB_ERROR_PORT: - zfcp_erp_port_shutdown(port, 0, 122, (u64)fsf_req); + zfcp_erp_port_shutdown(port, 0, 122, fsf_req); break; default: @@ -929,13 +927,13 @@ zfcp_fsf_status_read_handler(struct zfcp_fsf_req *fsf_req) "Restarting operations on this adapter\n", zfcp_get_busid_by_adapter(adapter)); /* All ports should be marked as ready to run again */ - zfcp_erp_modify_adapter_status(adapter, 30, 0, + zfcp_erp_modify_adapter_status(adapter, 30, NULL, ZFCP_STATUS_COMMON_RUNNING, ZFCP_SET); zfcp_erp_adapter_reopen(adapter, ZFCP_STATUS_ADAPTER_LINK_UNPLUGGED | ZFCP_STATUS_COMMON_ERP_FAILED, - 102, (u64)fsf_req); + 102, fsf_req); break; case FSF_STATUS_READ_NOTIFICATION_LOST: @@ -969,14 +967,13 @@ zfcp_fsf_status_read_handler(struct zfcp_fsf_req *fsf_req) if (status_buffer->status_subtype & FSF_STATUS_READ_SUB_ACT_UPDATED) - zfcp_erp_adapter_access_changed(adapter, 135, - (u64)fsf_req); + zfcp_erp_adapter_access_changed(adapter, 135, fsf_req); break; case FSF_STATUS_READ_CFDC_UPDATED: ZFCP_LOG_NORMAL("CFDC has been updated on the adapter %s\n", zfcp_get_busid_by_adapter(adapter)); - zfcp_erp_adapter_access_changed(adapter, 136, (u64)fsf_req); + zfcp_erp_adapter_access_changed(adapter, 136, fsf_req); break; case FSF_STATUS_READ_CFDC_HARDENED: @@ -1044,7 +1041,7 @@ zfcp_fsf_status_read_handler(struct zfcp_fsf_req *fsf_req) ZFCP_LOG_INFO("restart adapter %s due to status read " "buffer shortage\n", zfcp_get_busid_by_adapter(adapter)); - zfcp_erp_adapter_reopen(adapter, 0, 103, (u64)fsf_req); + zfcp_erp_adapter_reopen(adapter, 0, 103, fsf_req); } } out: @@ -1164,7 +1161,7 @@ zfcp_fsf_abort_fcp_command_handler(struct zfcp_fsf_req *new_fsf_req) sizeof (union fsf_status_qual)); /* Let's hope this sorts out the mess */ zfcp_erp_adapter_reopen(unit->port->adapter, 0, 104, - (u64)new_fsf_req); + new_fsf_req); new_fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; } break; @@ -1192,8 +1189,7 @@ zfcp_fsf_abort_fcp_command_handler(struct zfcp_fsf_req *new_fsf_req) fsf_status_qual, sizeof (union fsf_status_qual)); /* Let's hope this sorts out the mess */ - zfcp_erp_port_reopen(unit->port, 0, 105, - (u64)new_fsf_req); + zfcp_erp_port_reopen(unit->port, 0, 105, new_fsf_req); new_fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; } break; @@ -1207,7 +1203,7 @@ zfcp_fsf_abort_fcp_command_handler(struct zfcp_fsf_req *new_fsf_req) ZFCP_LOG_INFO("Remote port 0x%016Lx on adapter %s needs to " "be reopened\n", unit->port->wwpn, zfcp_get_busid_by_unit(unit)); - zfcp_erp_port_boxed(unit->port, 47, (u64)new_fsf_req); + zfcp_erp_port_boxed(unit->port, 47, new_fsf_req); new_fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR | ZFCP_STATUS_FSFREQ_RETRY; break; @@ -1218,7 +1214,7 @@ zfcp_fsf_abort_fcp_command_handler(struct zfcp_fsf_req *new_fsf_req) "to be 
reopened\n", unit->fcp_lun, unit->port->wwpn, zfcp_get_busid_by_unit(unit)); - zfcp_erp_unit_boxed(unit, 48, (u64)new_fsf_req); + zfcp_erp_unit_boxed(unit, 48, new_fsf_req); new_fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR | ZFCP_STATUS_FSFREQ_RETRY; break; @@ -1452,7 +1448,7 @@ zfcp_fsf_send_ct_handler(struct zfcp_fsf_req *fsf_req) zfcp_get_busid_by_port(port), ZFCP_FC_SERVICE_CLASS_DEFAULT); /* stop operation for this adapter */ - zfcp_erp_adapter_shutdown(adapter, 0, 123, (u64)fsf_req); + zfcp_erp_adapter_shutdown(adapter, 0, 123, fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -1492,7 +1488,7 @@ zfcp_fsf_send_ct_handler(struct zfcp_fsf_req *fsf_req) break; } } - zfcp_erp_port_access_denied(port, 55, (u64)fsf_req); + zfcp_erp_port_access_denied(port, 55, fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -1516,7 +1512,7 @@ zfcp_fsf_send_ct_handler(struct zfcp_fsf_req *fsf_req) ZFCP_HEX_DUMP(ZFCP_LOG_LEVEL_INFO, (char *) &header->fsf_status_qual, sizeof (union fsf_status_qual)); - zfcp_erp_adapter_reopen(adapter, 0, 106, (u64)fsf_req); + zfcp_erp_adapter_reopen(adapter, 0, 106, fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -1524,7 +1520,7 @@ zfcp_fsf_send_ct_handler(struct zfcp_fsf_req *fsf_req) ZFCP_LOG_INFO("port needs to be reopened " "(adapter %s, port d_id=0x%06x)\n", zfcp_get_busid_by_port(port), port->d_id); - zfcp_erp_port_boxed(port, 49, (u64)fsf_req); + zfcp_erp_port_boxed(port, 49, fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR | ZFCP_STATUS_FSFREQ_RETRY; break; @@ -1746,7 +1742,7 @@ static int zfcp_fsf_send_els_handler(struct zfcp_fsf_req *fsf_req) zfcp_get_busid_by_adapter(adapter), ZFCP_FC_SERVICE_CLASS_DEFAULT); /* stop operation for this adapter */ - zfcp_erp_adapter_shutdown(adapter, 0, 124, (u64)fsf_req); + zfcp_erp_adapter_shutdown(adapter, 0, 124, fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -1842,7 +1838,7 @@ static int zfcp_fsf_send_els_handler(struct zfcp_fsf_req *fsf_req) } } if (port != NULL) - zfcp_erp_port_access_denied(port, 56, (u64)fsf_req); + zfcp_erp_port_access_denied(port, 56, fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -2060,7 +2056,7 @@ zfcp_fsf_exchange_config_evaluate(struct zfcp_fsf_req *fsf_req, int xchg_ok) "versions in comparison to this device " "driver (try updated device driver)\n", zfcp_get_busid_by_adapter(adapter)); - zfcp_erp_adapter_shutdown(adapter, 0, 125, (u64)fsf_req); + zfcp_erp_adapter_shutdown(adapter, 0, 125, fsf_req); return -EIO; } if (ZFCP_QTCB_VERSION > bottom->high_qtcb_version) { @@ -2069,7 +2065,7 @@ zfcp_fsf_exchange_config_evaluate(struct zfcp_fsf_req *fsf_req, int xchg_ok) "versions than this device driver uses" "(consider a microcode upgrade)\n", zfcp_get_busid_by_adapter(adapter)); - zfcp_erp_adapter_shutdown(adapter, 0, 126, (u64)fsf_req); + zfcp_erp_adapter_shutdown(adapter, 0, 126, fsf_req); return -EIO; } return 0; @@ -2115,7 +2111,7 @@ zfcp_fsf_exchange_config_data_handler(struct zfcp_fsf_req *fsf_req) "topology detected at adapter %s " "unsupported, shutting down adapter\n", zfcp_get_busid_by_adapter(adapter)); - zfcp_erp_adapter_shutdown(adapter, 0, 127, (u64)fsf_req); + zfcp_erp_adapter_shutdown(adapter, 0, 127, fsf_req); return -EIO; case FC_PORTTYPE_NPORT: ZFCP_LOG_NORMAL("Switched fabric fibrechannel " @@ -2130,7 +2126,7 @@ zfcp_fsf_exchange_config_data_handler(struct zfcp_fsf_req *fsf_req) "of a type known to the zfcp " "driver, shutting down adapter\n", zfcp_get_busid_by_adapter(adapter)); - 
zfcp_erp_adapter_shutdown(adapter, 0, 128, (u64)fsf_req); + zfcp_erp_adapter_shutdown(adapter, 0, 128, fsf_req); return -EIO; } bottom = &qtcb->bottom.config; @@ -2142,7 +2138,7 @@ zfcp_fsf_exchange_config_data_handler(struct zfcp_fsf_req *fsf_req) bottom->max_qtcb_size, zfcp_get_busid_by_adapter(adapter), sizeof(struct fsf_qtcb)); - zfcp_erp_adapter_shutdown(adapter, 0, 129, (u64)fsf_req); + zfcp_erp_adapter_shutdown(adapter, 0, 129, fsf_req); return -EIO; } atomic_set_mask(ZFCP_STATUS_ADAPTER_XCONFIG_OK, @@ -2159,7 +2155,7 @@ zfcp_fsf_exchange_config_data_handler(struct zfcp_fsf_req *fsf_req) &qtcb->header.fsf_status_qual.link_down_info); break; default: - zfcp_erp_adapter_shutdown(adapter, 0, 130, (u64)fsf_req); + zfcp_erp_adapter_shutdown(adapter, 0, 130, fsf_req); return -EIO; } return 0; @@ -2458,7 +2454,7 @@ zfcp_fsf_open_port_handler(struct zfcp_fsf_req *fsf_req) break; } } - zfcp_erp_port_access_denied(port, 57, (u64)fsf_req); + zfcp_erp_port_access_denied(port, 57, fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -2467,7 +2463,7 @@ zfcp_fsf_open_port_handler(struct zfcp_fsf_req *fsf_req) "The remote port 0x%016Lx on adapter %s " "could not be opened. Disabling it.\n", port->wwpn, zfcp_get_busid_by_port(port)); - zfcp_erp_port_failed(port, 31, (u64)fsf_req); + zfcp_erp_port_failed(port, 31, fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -2487,7 +2483,7 @@ zfcp_fsf_open_port_handler(struct zfcp_fsf_req *fsf_req) "Disabling it.\n", port->wwpn, zfcp_get_busid_by_port(port)); - zfcp_erp_port_failed(port, 32, (u64)fsf_req); + zfcp_erp_port_failed(port, 32, fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; default: @@ -2669,7 +2665,7 @@ zfcp_fsf_close_port_handler(struct zfcp_fsf_req *fsf_req) ZFCP_HEX_DUMP(ZFCP_LOG_LEVEL_DEBUG, (char *) &fsf_req->qtcb->header.fsf_status_qual, sizeof (union fsf_status_qual)); - zfcp_erp_adapter_reopen(port->adapter, 0, 107, (u64)fsf_req); + zfcp_erp_adapter_reopen(port->adapter, 0, 107, fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -2684,7 +2680,7 @@ zfcp_fsf_close_port_handler(struct zfcp_fsf_req *fsf_req) ZFCP_LOG_TRACE("remote port 0x016%Lx on adapter %s closed, " "port handle 0x%x\n", port->wwpn, zfcp_get_busid_by_port(port), port->handle); - zfcp_erp_modify_port_status(port, 33, (u64)fsf_req, + zfcp_erp_modify_port_status(port, 33, fsf_req, ZFCP_STATUS_COMMON_OPEN, ZFCP_CLEAR); retval = 0; @@ -2806,7 +2802,7 @@ zfcp_fsf_close_physical_port_handler(struct zfcp_fsf_req *fsf_req) ZFCP_HEX_DUMP(ZFCP_LOG_LEVEL_DEBUG, (char *) &header->fsf_status_qual, sizeof (union fsf_status_qual)); - zfcp_erp_adapter_reopen(port->adapter, 0, 108, (u64)fsf_req); + zfcp_erp_adapter_reopen(port->adapter, 0, 108, fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -2827,7 +2823,7 @@ zfcp_fsf_close_physical_port_handler(struct zfcp_fsf_req *fsf_req) break; } } - zfcp_erp_port_access_denied(port, 58, (u64)fsf_req); + zfcp_erp_port_access_denied(port, 58, fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -2837,7 +2833,7 @@ zfcp_fsf_close_physical_port_handler(struct zfcp_fsf_req *fsf_req) "to close it physically.\n", port->wwpn, zfcp_get_busid_by_port(port)); - zfcp_erp_port_boxed(port, 50, (u64)fsf_req); + zfcp_erp_port_boxed(port, 50, fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR | ZFCP_STATUS_FSFREQ_RETRY; @@ -3016,8 +3012,7 @@ zfcp_fsf_open_unit_handler(struct zfcp_fsf_req *fsf_req) ZFCP_HEX_DUMP(ZFCP_LOG_LEVEL_DEBUG, (char *) &header->fsf_status_qual, sizeof (union 
fsf_status_qual)); - zfcp_erp_adapter_reopen(unit->port->adapter, 0, 109, - (u64)fsf_req); + zfcp_erp_adapter_reopen(unit->port->adapter, 0, 109, fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -3047,7 +3042,7 @@ zfcp_fsf_open_unit_handler(struct zfcp_fsf_req *fsf_req) break; } } - zfcp_erp_unit_access_denied(unit, 59, (u64)fsf_req); + zfcp_erp_unit_access_denied(unit, 59, fsf_req); atomic_clear_mask(ZFCP_STATUS_UNIT_SHARED, &unit->status); atomic_clear_mask(ZFCP_STATUS_UNIT_READONLY, &unit->status); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; @@ -3057,7 +3052,7 @@ zfcp_fsf_open_unit_handler(struct zfcp_fsf_req *fsf_req) ZFCP_LOG_DEBUG("The remote port 0x%016Lx on adapter %s " "needs to be reopened\n", unit->port->wwpn, zfcp_get_busid_by_unit(unit)); - zfcp_erp_port_boxed(unit->port, 51, (u64)fsf_req); + zfcp_erp_port_boxed(unit->port, 51, fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR | ZFCP_STATUS_FSFREQ_RETRY; break; @@ -3097,7 +3092,7 @@ zfcp_fsf_open_unit_handler(struct zfcp_fsf_req *fsf_req) ZFCP_HEX_DUMP(ZFCP_LOG_LEVEL_DEBUG, (char *) &header->fsf_status_qual, sizeof (union fsf_status_qual)); - zfcp_erp_unit_access_denied(unit, 60, (u64)fsf_req); + zfcp_erp_unit_access_denied(unit, 60, fsf_req); atomic_clear_mask(ZFCP_STATUS_UNIT_SHARED, &unit->status); atomic_clear_mask(ZFCP_STATUS_UNIT_READONLY, &unit->status); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; @@ -3111,7 +3106,7 @@ zfcp_fsf_open_unit_handler(struct zfcp_fsf_req *fsf_req) unit->fcp_lun, unit->port->wwpn, zfcp_get_busid_by_unit(unit)); - zfcp_erp_unit_failed(unit, 34, (u64)fsf_req); + zfcp_erp_unit_failed(unit, 34, fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -3181,17 +3176,15 @@ zfcp_fsf_open_unit_handler(struct zfcp_fsf_req *fsf_req) if (exclusive && !readwrite) { ZFCP_LOG_NORMAL("exclusive access of read-only " "unit not supported\n"); - zfcp_erp_unit_failed(unit, 35, (u64)fsf_req); + zfcp_erp_unit_failed(unit, 35, fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; - zfcp_erp_unit_shutdown(unit, 0, 80, - (u64)fsf_req); + zfcp_erp_unit_shutdown(unit, 0, 80, fsf_req); } else if (!exclusive && readwrite) { ZFCP_LOG_NORMAL("shared access of read-write " "unit not supported\n"); - zfcp_erp_unit_failed(unit, 36, (u64)fsf_req); + zfcp_erp_unit_failed(unit, 36, fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; - zfcp_erp_unit_shutdown(unit, 0, 81, - (u64)fsf_req); + zfcp_erp_unit_shutdown(unit, 0, 81, fsf_req); } } @@ -3314,8 +3307,7 @@ zfcp_fsf_close_unit_handler(struct zfcp_fsf_req *fsf_req) ZFCP_HEX_DUMP(ZFCP_LOG_LEVEL_DEBUG, (char *) &fsf_req->qtcb->header.fsf_status_qual, sizeof (union fsf_status_qual)); - zfcp_erp_adapter_reopen(unit->port->adapter, 0, 110, - (u64)fsf_req); + zfcp_erp_adapter_reopen(unit->port->adapter, 0, 110, fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -3331,7 +3323,7 @@ zfcp_fsf_close_unit_handler(struct zfcp_fsf_req *fsf_req) ZFCP_HEX_DUMP(ZFCP_LOG_LEVEL_DEBUG, (char *) &fsf_req->qtcb->header.fsf_status_qual, sizeof (union fsf_status_qual)); - zfcp_erp_port_reopen(unit->port, 0, 111, (u64)fsf_req); + zfcp_erp_port_reopen(unit->port, 0, 111, fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -3340,7 +3332,7 @@ zfcp_fsf_close_unit_handler(struct zfcp_fsf_req *fsf_req) "needs to be reopened\n", unit->port->wwpn, zfcp_get_busid_by_unit(unit)); - zfcp_erp_port_boxed(unit->port, 52, (u64)fsf_req); + zfcp_erp_port_boxed(unit->port, 52, fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR | ZFCP_STATUS_FSFREQ_RETRY; 
break; @@ -3534,7 +3526,7 @@ zfcp_fsf_send_fcp_command_task(struct zfcp_adapter *adapter, zfcp_get_busid_by_unit(unit), unit->port->wwpn, unit->fcp_lun); - zfcp_erp_unit_shutdown(unit, 0, 131, (u64)fsf_req); + zfcp_erp_unit_shutdown(unit, 0, 131, fsf_req); retval = -EINVAL; } goto no_fit; @@ -3692,8 +3684,7 @@ zfcp_fsf_send_fcp_command_handler(struct zfcp_fsf_req *fsf_req) ZFCP_HEX_DUMP(ZFCP_LOG_LEVEL_DEBUG, (char *) &header->fsf_status_qual, sizeof (union fsf_status_qual)); - zfcp_erp_adapter_reopen(unit->port->adapter, 0, 112, - (u64)fsf_req); + zfcp_erp_adapter_reopen(unit->port->adapter, 0, 112, fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -3709,7 +3700,7 @@ zfcp_fsf_send_fcp_command_handler(struct zfcp_fsf_req *fsf_req) ZFCP_HEX_DUMP(ZFCP_LOG_LEVEL_NORMAL, (char *) &header->fsf_status_qual, sizeof (union fsf_status_qual)); - zfcp_erp_port_reopen(unit->port, 0, 113, (u64)fsf_req); + zfcp_erp_port_reopen(unit->port, 0, 113, fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -3725,8 +3716,7 @@ zfcp_fsf_send_fcp_command_handler(struct zfcp_fsf_req *fsf_req) ZFCP_HEX_DUMP(ZFCP_LOG_LEVEL_NORMAL, (char *) &header->fsf_status_qual, sizeof (union fsf_status_qual)); - zfcp_erp_adapter_reopen(unit->port->adapter, 0, 114, - (u64)fsf_req); + zfcp_erp_adapter_reopen(unit->port->adapter, 0, 114, fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -3736,8 +3726,7 @@ zfcp_fsf_send_fcp_command_handler(struct zfcp_fsf_req *fsf_req) zfcp_get_busid_by_unit(unit), ZFCP_FC_SERVICE_CLASS_DEFAULT); /* stop operation for this adapter */ - zfcp_erp_adapter_shutdown(unit->port->adapter, 0, 132, - (u64)fsf_req); + zfcp_erp_adapter_shutdown(unit->port->adapter, 0, 132, fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -3753,7 +3742,7 @@ zfcp_fsf_send_fcp_command_handler(struct zfcp_fsf_req *fsf_req) ZFCP_HEX_DUMP(ZFCP_LOG_LEVEL_DEBUG, (char *) &header->fsf_status_qual, sizeof (union fsf_status_qual)); - zfcp_erp_port_reopen(unit->port, 0, 115, (u64)fsf_req); + zfcp_erp_port_reopen(unit->port, 0, 115, fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -3775,7 +3764,7 @@ zfcp_fsf_send_fcp_command_handler(struct zfcp_fsf_req *fsf_req) break; } } - zfcp_erp_unit_access_denied(unit, 61, (u64)fsf_req); + zfcp_erp_unit_access_denied(unit, 61, fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -3788,8 +3777,7 @@ zfcp_fsf_send_fcp_command_handler(struct zfcp_fsf_req *fsf_req) zfcp_get_busid_by_unit(unit), fsf_req->qtcb->bottom.io.data_direction); /* stop operation for this adapter */ - zfcp_erp_adapter_shutdown(unit->port->adapter, 0, 133, - (u64)fsf_req); + zfcp_erp_adapter_shutdown(unit->port->adapter, 0, 133, fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -3802,8 +3790,7 @@ zfcp_fsf_send_fcp_command_handler(struct zfcp_fsf_req *fsf_req) zfcp_get_busid_by_unit(unit), fsf_req->qtcb->bottom.io.fcp_cmnd_length); /* stop operation for this adapter */ - zfcp_erp_adapter_shutdown(unit->port->adapter, 0, 134, - (u64)fsf_req); + zfcp_erp_adapter_shutdown(unit->port->adapter, 0, 134, fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; break; @@ -3811,7 +3798,7 @@ zfcp_fsf_send_fcp_command_handler(struct zfcp_fsf_req *fsf_req) ZFCP_LOG_DEBUG("The remote port 0x%016Lx on adapter %s " "needs to be reopened\n", unit->port->wwpn, zfcp_get_busid_by_unit(unit)); - zfcp_erp_port_boxed(unit->port, 53, (u64)fsf_req); + zfcp_erp_port_boxed(unit->port, 53, fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR | 
ZFCP_STATUS_FSFREQ_RETRY; break; @@ -3821,7 +3808,7 @@ zfcp_fsf_send_fcp_command_handler(struct zfcp_fsf_req *fsf_req) "wwpn=0x%016Lx, fcp_lun=0x%016Lx)\n", zfcp_get_busid_by_unit(unit), unit->port->wwpn, unit->fcp_lun); - zfcp_erp_unit_boxed(unit, 54, (u64)fsf_req); + zfcp_erp_unit_boxed(unit, 54, fsf_req); fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR | ZFCP_STATUS_FSFREQ_RETRY; break; @@ -4681,7 +4668,7 @@ static int zfcp_fsf_req_send(struct zfcp_fsf_req *fsf_req) req_queue->free_index -= fsf_req->sbal_number; req_queue->free_index += QDIO_MAX_BUFFERS_PER_Q; req_queue->free_index %= QDIO_MAX_BUFFERS_PER_Q; /* wrap */ - zfcp_erp_adapter_reopen(adapter, 0, 116, (u64)fsf_req); + zfcp_erp_adapter_reopen(adapter, 0, 116, fsf_req); } else { req_queue->distance_from_int = new_distance_from_int; /* diff --git a/drivers/s390/scsi/zfcp_qdio.c b/drivers/s390/scsi/zfcp_qdio.c index 5d60a4116aff..8ca5f074c687 100644 --- a/drivers/s390/scsi/zfcp_qdio.c +++ b/drivers/s390/scsi/zfcp_qdio.c @@ -176,7 +176,8 @@ zfcp_qdio_handler_error_check(struct zfcp_adapter *adapter, unsigned int status, */ zfcp_erp_adapter_reopen(adapter, ZFCP_STATUS_ADAPTER_LINK_UNPLUGGED | - ZFCP_STATUS_COMMON_ERP_FAILED, 140, 0); + ZFCP_STATUS_COMMON_ERP_FAILED, 140, + NULL); } return retval; } diff --git a/drivers/s390/scsi/zfcp_scsi.c b/drivers/s390/scsi/zfcp_scsi.c index cd844b2ad7a1..3c9880e46e81 100644 --- a/drivers/s390/scsi/zfcp_scsi.c +++ b/drivers/s390/scsi/zfcp_scsi.c @@ -185,7 +185,7 @@ static void zfcp_scsi_slave_destroy(struct scsi_device *sdpnt) atomic_clear_mask(ZFCP_STATUS_UNIT_REGISTERED, &unit->status); sdpnt->hostdata = NULL; unit->device = NULL; - zfcp_erp_unit_failed(unit, 12, 0); + zfcp_erp_unit_failed(unit, 12, NULL); zfcp_unit_put(unit); } else ZFCP_LOG_NORMAL("bug: no unit associated with SCSI device at " @@ -529,7 +529,7 @@ static int zfcp_scsi_eh_host_reset_handler(struct scsi_cmnd *scpnt) unit->fcp_lun, unit->port->wwpn, zfcp_get_busid_by_adapter(unit->port->adapter)); - zfcp_erp_adapter_reopen(adapter, 0, 141, (u64)scpnt); + zfcp_erp_adapter_reopen(adapter, 0, 141, scpnt); zfcp_erp_wait(adapter); return SUCCESS; diff --git a/drivers/s390/scsi/zfcp_sysfs_adapter.c b/drivers/s390/scsi/zfcp_sysfs_adapter.c index e0bbcc440a52..ccbba4dd3a77 100644 --- a/drivers/s390/scsi/zfcp_sysfs_adapter.c +++ b/drivers/s390/scsi/zfcp_sysfs_adapter.c @@ -89,7 +89,7 @@ zfcp_sysfs_port_add_store(struct device *dev, struct device_attribute *attr, con retval = 0; - zfcp_erp_port_reopen(port, 0, 91, 0); + zfcp_erp_port_reopen(port, 0, 91, NULL); zfcp_erp_wait(port->adapter); zfcp_port_put(port); out: @@ -147,7 +147,7 @@ zfcp_sysfs_port_remove_store(struct device *dev, struct device_attribute *attr, goto out; } - zfcp_erp_port_shutdown(port, 0, 92, 0); + zfcp_erp_port_shutdown(port, 0, 92, NULL); zfcp_erp_wait(adapter); zfcp_port_put(port); zfcp_port_dequeue(port); @@ -191,9 +191,10 @@ zfcp_sysfs_adapter_failed_store(struct device *dev, struct device_attribute *att goto out; } - zfcp_erp_modify_adapter_status(adapter, 44, 0, + zfcp_erp_modify_adapter_status(adapter, 44, NULL, ZFCP_STATUS_COMMON_RUNNING, ZFCP_SET); - zfcp_erp_adapter_reopen(adapter, ZFCP_STATUS_COMMON_ERP_FAILED, 93, 0); + zfcp_erp_adapter_reopen(adapter, ZFCP_STATUS_COMMON_ERP_FAILED, 93, + NULL); zfcp_erp_wait(adapter); out: up(&zfcp_data.config_sema); diff --git a/drivers/s390/scsi/zfcp_sysfs_port.c b/drivers/s390/scsi/zfcp_sysfs_port.c index 538195034c68..703c1b5cb602 100644 --- a/drivers/s390/scsi/zfcp_sysfs_port.c +++ b/drivers/s390/scsi/zfcp_sysfs_port.c @@ 
-94,7 +94,7 @@ zfcp_sysfs_unit_add_store(struct device *dev, struct device_attribute *attr, con retval = 0; - zfcp_erp_unit_reopen(unit, 0, 94, 0); + zfcp_erp_unit_reopen(unit, 0, 94, NULL); zfcp_erp_wait(unit->port->adapter); zfcp_unit_put(unit); out: @@ -150,7 +150,7 @@ zfcp_sysfs_unit_remove_store(struct device *dev, struct device_attribute *attr, goto out; } - zfcp_erp_unit_shutdown(unit, 0, 95, 0); + zfcp_erp_unit_shutdown(unit, 0, 95, NULL); zfcp_erp_wait(unit->port->adapter); zfcp_unit_put(unit); zfcp_unit_dequeue(unit); @@ -193,9 +193,9 @@ zfcp_sysfs_port_failed_store(struct device *dev, struct device_attribute *attr, goto out; } - zfcp_erp_modify_port_status(port, 45, 0, + zfcp_erp_modify_port_status(port, 45, NULL, ZFCP_STATUS_COMMON_RUNNING, ZFCP_SET); - zfcp_erp_port_reopen(port, ZFCP_STATUS_COMMON_ERP_FAILED, 96, 0); + zfcp_erp_port_reopen(port, ZFCP_STATUS_COMMON_ERP_FAILED, 96, NULL); zfcp_erp_wait(port->adapter); out: up(&zfcp_data.config_sema); diff --git a/drivers/s390/scsi/zfcp_sysfs_unit.c b/drivers/s390/scsi/zfcp_sysfs_unit.c index fd73568b44b4..80fb2c2cf48a 100644 --- a/drivers/s390/scsi/zfcp_sysfs_unit.c +++ b/drivers/s390/scsi/zfcp_sysfs_unit.c @@ -94,9 +94,9 @@ zfcp_sysfs_unit_failed_store(struct device *dev, struct device_attribute *attr, goto out; } - zfcp_erp_modify_unit_status(unit, 46, 0, + zfcp_erp_modify_unit_status(unit, 46, NULL, ZFCP_STATUS_COMMON_RUNNING, ZFCP_SET); - zfcp_erp_unit_reopen(unit, ZFCP_STATUS_COMMON_ERP_FAILED, 97, 0); + zfcp_erp_unit_reopen(unit, ZFCP_STATUS_COMMON_ERP_FAILED, 97, NULL); zfcp_erp_wait(unit->port->adapter); out: up(&zfcp_data.config_sema);
-- cgit v1.2.3-59-g8ed1b

From 6071d7ec36054e78f02f7d5a66c3784aeb65ce92 Mon Sep 17 00:00:00 2001
From: Christof Schmitt
Date: Fri, 18 Apr 2008 12:51:56 +0200
Subject: [SCSI] zfcp: Remove zfcp_erp_wait from slave destroy handler to fix deadlock

The testcase
# chchp -v 0 0.da && sleep 59 && chchp -v 1 0.da
results in this deadlock situation:

STACK TRACE FOR TASK: 0x7e9a2048 (zfcperp0.0.c613)
 0 schedule+816 [0x356b3c]
 1 schedule_timeout+172 [0x357340]
 2 wait_for_common+192 [0x3565fc]
 3 flush_cpu_workqueue+116 [0x52af0]
 4 flush_workqueue+116 [0x533b8]
 5 fc_remote_port_add+64 [0x1c83ec]
 6 zfcp_erp_thread+4534 [0x26585a]
 7 kernel_thread_starter+6 [0x195d2]

STACK TRACE FOR TASK: 0x7f8ec048 (fc_wq_0)
 0 schedule+816 [0x356b3c]
 1 zfcp_erp_wait+104 [0x264568]
 2 zfcp_scsi_slave_destroy+64 [0x261b24]
 3 __scsi_remove_device+154 [0x1c24ba]
 4 scsi_remove_device+62 [0x1c2512]
 5 __scsi_remove_target+198 [0x1c25ea]
 6 __remove_child+58 [0x1c26d6]
 7 device_for_each_child+66 [0x1ab566]
 8 scsi_remove_target+98 [0x1c268a]
 9 run_workqueue+200 [0x5272c]
10 worker_thread+146 [0x52882]
11 kthread+140 [0x58360]
12 kernel_thread_starter+6 [0x195d2]

Remove the zfcp_erp_wait call that is not required here to prevent the
deadlock situation.

Signed-off-by: Christof Schmitt
Signed-off-by: Martin Peschke
Signed-off-by: James Bottomley
---
 drivers/s390/scsi/zfcp_scsi.c | 1 -
 1 file changed, 1 deletion(-)
(limited to 'drivers/s390')

diff --git a/drivers/s390/scsi/zfcp_scsi.c b/drivers/s390/scsi/zfcp_scsi.c
index 3c9880e46e81..f81850624eed 100644
--- a/drivers/s390/scsi/zfcp_scsi.c
+++ b/drivers/s390/scsi/zfcp_scsi.c
@@ -181,7 +181,6 @@ static void zfcp_scsi_slave_destroy(struct scsi_device *sdpnt) struct zfcp_unit *unit = (struct zfcp_unit *) sdpnt->hostdata; if (unit) { - zfcp_erp_wait(unit->port->adapter); atomic_clear_mask(ZFCP_STATUS_UNIT_REGISTERED, &unit->status); sdpnt->hostdata = NULL; unit->device = NULL;
-- cgit v1.2.3-59-g8ed1b

From 57b7658aed76f1763416878ead9be4ffa288b7a3 Mon Sep 17 00:00:00 2001
From: Christof Schmitt
Date: Fri, 18 Apr 2008 12:51:57 +0200
Subject: [SCSI] zfcp: Fix error handling for blocked unit for send FCP command

In the case the unit is blocked, zfcp_unit_get has not been called yet,
so the error handling path should not call zfcp_unit_put.

Signed-off-by: Christof Schmitt
Signed-off-by: Martin Peschke
Signed-off-by: James Bottomley
---
 drivers/s390/scsi/zfcp_fsf.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
(limited to 'drivers/s390')

diff --git a/drivers/s390/scsi/zfcp_fsf.c b/drivers/s390/scsi/zfcp_fsf.c
index 3211dcc59543..7c3f02816e95 100644
--- a/drivers/s390/scsi/zfcp_fsf.c
+++ b/drivers/s390/scsi/zfcp_fsf.c
@@ -3562,8 +3562,8 @@ zfcp_fsf_send_fcp_command_task(struct zfcp_adapter *adapter, send_failed: no_fit: failed_scsi_cmnd: - unit_blocked: zfcp_unit_put(unit); + unit_blocked: zfcp_fsf_req_free(fsf_req); fsf_req = NULL; scsi_cmnd->host_scribble = NULL;
-- cgit v1.2.3-59-g8ed1b
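
The last patch above works by moving the unit_blocked: label below the zfcp_unit_put() call, so the early exit taken when the unit is blocked no longer drops a reference that was never acquired. The stand-alone C program below is a minimal sketch of that error-path ordering with goto-style cleanup; it is illustrative only and not zfcp code, and all names in it (toy_unit, toy_unit_get, toy_unit_put, toy_send_command, setup_fails) are invented for the example.

#include <stdbool.h>
#include <stdio.h>

struct toy_unit {
	int refcount;
	bool blocked;
};

static void toy_unit_get(struct toy_unit *u)
{
	u->refcount++;
}

static void toy_unit_put(struct toy_unit *u)
{
	u->refcount--;
}

/* Returns 0 on success, -1 on any failure. */
static int toy_send_command(struct toy_unit *u, bool setup_fails)
{
	int retval = 0;

	if (u->blocked) {
		retval = -1;
		/* No reference has been taken yet, so skip the put. */
		goto unit_blocked;
	}

	toy_unit_get(u);	/* a reference is held from here on */

	if (setup_fails) {
		retval = -1;
		goto failed;	/* must release the reference taken above */
	}

	/* Normal completion path: release the reference and return. */
	toy_unit_put(u);
	return retval;

failed:
	toy_unit_put(u);	/* balances toy_unit_get() */
unit_blocked:
	/* Cleanup that is valid whether or not a reference was taken. */
	return retval;
}

int main(void)
{
	struct toy_unit u = { .refcount = 0, .blocked = true };

	toy_send_command(&u, false);
	printf("after blocked attempt:  refcount = %d\n", u.refcount);

	u.blocked = false;
	toy_send_command(&u, true);
	printf("after failed setup:     refcount = %d\n", u.refcount);

	toy_send_command(&u, false);
	printf("after successful send:  refcount = %d\n", u.refcount);

	return 0;
}

All three counters print 0: each goto target undoes exactly what had been done up to the point of failure, which is the same invariant the patch restores by placing unit_blocked: after the put.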