Commit cbeb0310 authored by Vishal Verma

Merge branch 'for-5.9/firmware-activate' into libnvdimm-for-next

This branch adds support for runtime firmware activation for NVDIMMs
that support it:

https://lore.kernel.org/linux-nvdimm/20200721104442.GF1676612@kroah.com/T/#m253296a85b1cd4ab8ad00efba4c3df2436437bf4
parents f4013ca6 a1facc1f
+19 −0
@@ -202,6 +202,25 @@ Description:
		functions. See the section named 'NVDIMM Root Device _DSMs' in
		the ACPI specification.

What:		/sys/bus/nd/devices/ndbusX/nfit/firmware_activate_noidle
Date:		Apr, 2020
KernelVersion:	v5.8
Contact:	linux-nvdimm@lists.01.org
Description:
		(RW) The Intel platform implementation of firmware activate
		support exposes an option to let the platform force idle
		devices in the system over the activation event, or to trust
		that the OS will do it. The safe default is to let the platform
		force idle devices, since the kernel is already in a suspend
		state and, on the chance that a driver does not properly
		quiesce bus-mastering after a suspend callback, the platform
		will handle it. However, the activation might abort if, for
		example, platform firmware determines that the activation time
		exceeds the max PCI-E completion timeout. Since the platform
		does not know whether the OS is running the activation from a
		suspend context, it aborts; but if the system owner trusts the
		driver suspend callbacks to be sufficient, then
		'firmware_activate_noidle' can be enabled to bypass the
		activation abort.
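
For illustration, the override can be toggled from the shell. This is a sketch, not part of the kernel sources; the helper name is invented and the `ndbusX` name depends on the system:

```shell
# Set the platform force-idle policy for firmware activation.
# $1: an ndbusX sysfs directory
# $2: Y (trust OS driver suspend callbacks) or N (default: platform force-idles)
fwa_set_noidle() {
	echo "$2" > "$1/nfit/firmware_activate_noidle"
}

# Example (bus name is system-dependent):
# fwa_set_noidle /sys/bus/nd/devices/ndbus0 Y
```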

What:		/sys/bus/nd/devices/regionX/nfit/range_index
Date:		Jun, 2015
+2 −0
The libnvdimm sub-system implements a common sysfs interface for
platform nvdimm resources. See Documentation/driver-api/nvdimm/.
+86 −0
.. SPDX-License-Identifier: GPL-2.0

==================================
NVDIMM Runtime Firmware Activation
==================================

Some persistent memory devices run firmware locally on the device /
"DIMM" to perform tasks like media management, capacity provisioning,
and health monitoring. The process of updating that firmware typically
involves a reboot because it has implications for in-flight memory
transactions. However, reboots are disruptive, and at least the Intel
persistent memory platform implementation, described by the Intel ACPI
DSM specification [1], has added support for activating firmware at
runtime.

A native sysfs interface is implemented in libnvdimm to allow platforms
to advertise and control their local runtime firmware activation
capability.

The libnvdimm bus object, ndbusX, implements an ndbusX/firmware/activate
attribute that shows the state of the firmware activation as one of 'idle',
'armed', 'overflow', and 'busy'.

- idle:
  No devices are set / armed to activate firmware

- armed:
  At least one device is armed

- busy:
  In the busy state armed devices are in the process of transitioning
  back to idle and completing an activation cycle.

- overflow:
  If the platform has a concept of incremental work needed to perform
  the activation, it may be that too many DIMMs are armed for
  activation. In that scenario the potential for firmware activation to
  time out is indicated by the 'overflow' state.

The 'ndbusX/firmware/activate' property can be written with a value of
either 'live', or 'quiesce'. A value of 'quiesce' triggers the kernel to
run firmware activation from within the equivalent of the hibernation
'freeze' state where drivers and applications are notified to stop their
modifications of system memory. A value of 'live' attempts
firmware activation without this hibernation cycle. The
'ndbusX/firmware/activate' property will be elided completely if no
firmware activation capability is detected.
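
The flow above can be sketched from the shell. The helpers below are illustrative only (not part of the kernel), device names are system-dependent, and the sketch assumes the per-DIMM 'nmemX/firmware/activate' attribute accepts an 'arm' write, as in the mainline nvdimm core:

```shell
# Sketch of the activation flow; SYSFS_ND may point at a test tree.
SYSFS_ND=${SYSFS_ND:-/sys/bus/nd/devices}

fwa_arm() {        # arm one DIMM (assumes nmemX accepts an 'arm' write)
	echo arm > "$SYSFS_ND/$1/firmware/activate"
}

fwa_activate() {   # trigger activation via the hibernate-freeze path
	state=$(cat "$SYSFS_ND/$1/firmware/activate")
	[ "$state" = armed ] || { echo "bus not armed: $state" >&2; return 1; }
	echo quiesce > "$SYSFS_ND/$1/firmware/activate"
}
```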

Another property, 'ndbusX/firmware/capability', indicates a value of
'live' or 'quiesce', where 'live' indicates that the firmware does not
require or impose any quiesce period on the system to update firmware.
A capability value of 'quiesce' indicates that the firmware does expect
and inject a quiet period for the memory controller, but 'live' may
still be written to 'ndbusX/firmware/activate' as an override, assuming
the risk of racing firmware update with in-flight device and
application activity. The 'ndbusX/firmware/capability' property will be
elided completely if no firmware activation capability is detected.
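
For example, a script might pick the activation mode from the advertised capability. A minimal sketch, with an invented helper name and an illustrative bus path:

```shell
# Use 'live' only when the platform advertises that no quiesce period
# is needed; otherwise fall back to the hibernate-freeze 'quiesce' path.
# $1: an ndbusX sysfs directory
fwa_auto_activate() {
	cap=$(cat "$1/firmware/capability")
	if [ "$cap" = live ]; then
		mode=live
	else
		mode=quiesce
	fi
	echo "$mode" > "$1/firmware/activate"
}
```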

The libnvdimm memory-device / DIMM object, nmemX, implements
'nmemX/firmware/activate' and 'nmemX/firmware/result' attributes to
communicate the per-device firmware activation state. Similar to the
'ndbusX/firmware/activate' attribute, the 'nmemX/firmware/activate'
attribute indicates 'idle', 'armed', or 'busy'. The state transitions
from 'armed' to 'idle' once the system is prepared to activate firmware
(the firmware image is staged and the state is set to 'armed') and
'ndbusX/firmware/activate' is triggered. After that activation event
the 'nmemX/firmware/result' attribute reflects the state of the last
activation as one of:

- none:
  No runtime activation triggered since the last time the device was reset

- success:
  The last runtime activation completed successfully.

- fail:
  The last runtime activation failed for device-specific reasons.

- not_staged:
  The last runtime activation failed due to a sequencing error of the
  firmware image not being staged.

- need_reset:
  Runtime firmware activation failed, but the firmware can still be
  activated via the legacy method of power-cycling the system.
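
After an activation event, the per-DIMM results can be collected with a small loop. This is an illustrative helper, not part of the kernel; nmem naming is system-dependent:

```shell
# Print the last firmware activation result for every DIMM under a
# directory. $1: the nd device directory (e.g. /sys/bus/nd/devices)
fwa_results() {
	for d in "$1"/nmem*; do
		[ -f "$d/firmware/result" ] || continue
		printf '%s: %s\n' "${d##*/}" "$(cat "$d/firmware/result")"
	done
}
```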

[1]: https://docs.pmem.io/persistent-memory/
+106 −36
@@ -73,6 +73,18 @@ const guid_t *to_nfit_uuid(enum nfit_uuids id)
}
EXPORT_SYMBOL(to_nfit_uuid);

static const guid_t *to_nfit_bus_uuid(int family)
{
	if (WARN_ONCE(family == NVDIMM_BUS_FAMILY_NFIT,
			"only secondary bus families can be translated\n"))
		return NULL;
	/*
	 * The index of bus UUIDs starts immediately following the last
	 * NVDIMM/leaf family.
	 */
	return to_nfit_uuid(family + NVDIMM_FAMILY_MAX);
}

static struct acpi_device *to_acpi_dev(struct acpi_nfit_desc *acpi_desc)
{
	struct nvdimm_bus_descriptor *nd_desc = &acpi_desc->nd_desc;
@@ -362,24 +374,8 @@ static u8 nfit_dsm_revid(unsigned family, unsigned func)
{
	static const u8 revid_table[NVDIMM_FAMILY_MAX+1][NVDIMM_CMD_MAX+1] = {
		[NVDIMM_FAMILY_INTEL] = {
			[NVDIMM_INTEL_GET_MODES] = 2,
			[NVDIMM_INTEL_GET_FWINFO] = 2,
			[NVDIMM_INTEL_START_FWUPDATE] = 2,
			[NVDIMM_INTEL_SEND_FWUPDATE] = 2,
			[NVDIMM_INTEL_FINISH_FWUPDATE] = 2,
			[NVDIMM_INTEL_QUERY_FWUPDATE] = 2,
			[NVDIMM_INTEL_SET_THRESHOLD] = 2,
			[NVDIMM_INTEL_INJECT_ERROR] = 2,
			[NVDIMM_INTEL_GET_SECURITY_STATE] = 2,
			[NVDIMM_INTEL_SET_PASSPHRASE] = 2,
			[NVDIMM_INTEL_DISABLE_PASSPHRASE] = 2,
			[NVDIMM_INTEL_UNLOCK_UNIT] = 2,
			[NVDIMM_INTEL_FREEZE_LOCK] = 2,
			[NVDIMM_INTEL_SECURE_ERASE] = 2,
			[NVDIMM_INTEL_OVERWRITE] = 2,
			[NVDIMM_INTEL_QUERY_OVERWRITE] = 2,
			[NVDIMM_INTEL_SET_MASTER_PASSPHRASE] = 2,
			[NVDIMM_INTEL_MASTER_SECURE_ERASE] = 2,
			[NVDIMM_INTEL_GET_MODES ...
				NVDIMM_INTEL_FW_ACTIVATE_ARM] = 2,
		},
	};
	u8 id;
@@ -406,7 +402,7 @@ static bool payload_dumpable(struct nvdimm *nvdimm, unsigned int func)
}

static int cmd_to_func(struct nfit_mem *nfit_mem, unsigned int cmd,
		struct nd_cmd_pkg *call_pkg)
		struct nd_cmd_pkg *call_pkg, int *family)
{
	if (call_pkg) {
		int i;
@@ -417,6 +413,7 @@ static int cmd_to_func(struct nfit_mem *nfit_mem, unsigned int cmd,
		for (i = 0; i < ARRAY_SIZE(call_pkg->nd_reserved2); i++)
			if (call_pkg->nd_reserved2[i])
				return -EINVAL;
		*family = call_pkg->nd_family;
		return call_pkg->nd_command;
	}

@@ -450,13 +447,14 @@ int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc, struct nvdimm *nvdimm,
	acpi_handle handle;
	const guid_t *guid;
	int func, rc, i;
	int family = 0;

	if (cmd_rc)
		*cmd_rc = -EINVAL;

	if (cmd == ND_CMD_CALL)
		call_pkg = buf;
	func = cmd_to_func(nfit_mem, cmd, call_pkg);
	func = cmd_to_func(nfit_mem, cmd, call_pkg, &family);
	if (func < 0)
		return func;

@@ -478,9 +476,17 @@ int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc, struct nvdimm *nvdimm,

		cmd_name = nvdimm_bus_cmd_name(cmd);
		cmd_mask = nd_desc->cmd_mask;
		dsm_mask = nd_desc->bus_dsm_mask;
		desc = nd_cmd_bus_desc(cmd);
		if (cmd == ND_CMD_CALL && call_pkg->nd_family) {
			family = call_pkg->nd_family;
			if (!test_bit(family, &nd_desc->bus_family_mask))
				return -EINVAL;
			dsm_mask = acpi_desc->family_dsm_mask[family];
			guid = to_nfit_bus_uuid(family);
		} else {
			dsm_mask = acpi_desc->bus_dsm_mask;
			guid = to_nfit_uuid(NFIT_DEV_BUS);
		}
		desc = nd_cmd_bus_desc(cmd);
		handle = adev->handle;
		dimm_name = "bus";
	}
@@ -516,8 +522,8 @@ int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc, struct nvdimm *nvdimm,
		in_buf.buffer.length = call_pkg->nd_size_in;
	}

	dev_dbg(dev, "%s cmd: %d: func: %d input length: %d\n",
		dimm_name, cmd, func, in_buf.buffer.length);
	dev_dbg(dev, "%s cmd: %d: family: %d func: %d input length: %d\n",
		dimm_name, cmd, family, func, in_buf.buffer.length);
	if (payload_dumpable(nvdimm, func))
		print_hex_dump_debug("nvdimm in  ", DUMP_PREFIX_OFFSET, 4, 4,
				in_buf.buffer.pointer,
@@ -1238,8 +1244,9 @@ static ssize_t bus_dsm_mask_show(struct device *dev,
{
	struct nvdimm_bus *nvdimm_bus = to_nvdimm_bus(dev);
	struct nvdimm_bus_descriptor *nd_desc = to_nd_desc(nvdimm_bus);
	struct acpi_nfit_desc *acpi_desc = to_acpi_desc(nd_desc);

	return sprintf(buf, "%#lx\n", nd_desc->bus_dsm_mask);
	return sprintf(buf, "%#lx\n", acpi_desc->bus_dsm_mask);
}
static struct device_attribute dev_attr_bus_dsm_mask =
		__ATTR(dsm_mask, 0444, bus_dsm_mask_show, NULL);
@@ -1385,8 +1392,12 @@ static umode_t nfit_visible(struct kobject *kobj, struct attribute *a, int n)
	struct device *dev = container_of(kobj, struct device, kobj);
	struct nvdimm_bus *nvdimm_bus = to_nvdimm_bus(dev);

	if (a == &dev_attr_scrub.attr && !ars_supported(nvdimm_bus))
		return 0;
	if (a == &dev_attr_scrub.attr)
		return ars_supported(nvdimm_bus) ? a->mode : 0;

	if (a == &dev_attr_firmware_activate_noidle.attr)
		return intel_fwa_supported(nvdimm_bus) ? a->mode : 0;

	return a->mode;
}

@@ -1395,6 +1406,7 @@ static struct attribute *acpi_nfit_attributes[] = {
	&dev_attr_scrub.attr,
	&dev_attr_hw_error_scrub.attr,
	&dev_attr_bus_dsm_mask.attr,
	&dev_attr_firmware_activate_noidle.attr,
	NULL,
};

@@ -1823,6 +1835,7 @@ static void populate_shutdown_status(struct nfit_mem *nfit_mem)
static int acpi_nfit_add_dimm(struct acpi_nfit_desc *acpi_desc,
		struct nfit_mem *nfit_mem, u32 device_handle)
{
	struct nvdimm_bus_descriptor *nd_desc = &acpi_desc->nd_desc;
	struct acpi_device *adev, *adev_dimm;
	struct device *dev = acpi_desc->dev;
	unsigned long dsm_mask, label_mask;
@@ -1834,6 +1847,7 @@ static int acpi_nfit_add_dimm(struct acpi_nfit_desc *acpi_desc,
	/* nfit test assumes 1:1 relationship between commands and dsms */
	nfit_mem->dsm_mask = acpi_desc->dimm_cmd_force_en;
	nfit_mem->family = NVDIMM_FAMILY_INTEL;
	set_bit(NVDIMM_FAMILY_INTEL, &nd_desc->dimm_family_mask);

	if (dcr->valid_fields & ACPI_NFIT_CONTROL_MFG_INFO_VALID)
		sprintf(nfit_mem->id, "%04x-%02x-%04x-%08x",
@@ -1886,10 +1900,13 @@ static int acpi_nfit_add_dimm(struct acpi_nfit_desc *acpi_desc,
	 * Note, that checking for function0 (bit0) tells us if any commands
	 * are reachable through this GUID.
	 */
	clear_bit(NVDIMM_FAMILY_INTEL, &nd_desc->dimm_family_mask);
	for (i = 0; i <= NVDIMM_FAMILY_MAX; i++)
		if (acpi_check_dsm(adev_dimm->handle, to_nfit_uuid(i), 1, 1))
		if (acpi_check_dsm(adev_dimm->handle, to_nfit_uuid(i), 1, 1)) {
			set_bit(i, &nd_desc->dimm_family_mask);
			if (family < 0 || i == default_dsm_family)
				family = i;
		}

	/* limit the supported commands to those that are publicly documented */
	nfit_mem->family = family;
@@ -2007,6 +2024,26 @@ static const struct nvdimm_security_ops *acpi_nfit_get_security_ops(int family)
	}
}

static const struct nvdimm_fw_ops *acpi_nfit_get_fw_ops(
		struct nfit_mem *nfit_mem)
{
	unsigned long mask;
	struct acpi_nfit_desc *acpi_desc = nfit_mem->acpi_desc;
	struct nvdimm_bus_descriptor *nd_desc = &acpi_desc->nd_desc;

	if (!nd_desc->fw_ops)
		return NULL;

	if (nfit_mem->family != NVDIMM_FAMILY_INTEL)
		return NULL;

	mask = nfit_mem->dsm_mask & NVDIMM_INTEL_FW_ACTIVATE_CMDMASK;
	if (mask != NVDIMM_INTEL_FW_ACTIVATE_CMDMASK)
		return NULL;

	return intel_fw_ops;
}

static int acpi_nfit_register_dimms(struct acpi_nfit_desc *acpi_desc)
{
	struct nfit_mem *nfit_mem;
@@ -2083,7 +2120,8 @@ static int acpi_nfit_register_dimms(struct acpi_nfit_desc *acpi_desc)
				acpi_nfit_dimm_attribute_groups,
				flags, cmd_mask, flush ? flush->hint_count : 0,
				nfit_mem->flush_wpq, &nfit_mem->id[0],
				acpi_nfit_get_security_ops(nfit_mem->family));
				acpi_nfit_get_security_ops(nfit_mem->family),
				acpi_nfit_get_fw_ops(nfit_mem));
		if (!nvdimm)
			return -ENOMEM;

@@ -2147,12 +2185,23 @@ static void acpi_nfit_init_dsms(struct acpi_nfit_desc *acpi_desc)
{
	struct nvdimm_bus_descriptor *nd_desc = &acpi_desc->nd_desc;
	const guid_t *guid = to_nfit_uuid(NFIT_DEV_BUS);
	unsigned long dsm_mask, *mask;
	struct acpi_device *adev;
	unsigned long dsm_mask;
	int i;

	set_bit(ND_CMD_CALL, &nd_desc->cmd_mask);
	set_bit(NVDIMM_BUS_FAMILY_NFIT, &nd_desc->bus_family_mask);

	/* enable nfit_test to inject bus command emulation */
	if (acpi_desc->bus_cmd_force_en) {
		nd_desc->cmd_mask = acpi_desc->bus_cmd_force_en;
	nd_desc->bus_dsm_mask = acpi_desc->bus_nfit_cmd_force_en;
		mask = &nd_desc->bus_family_mask;
		if (acpi_desc->family_dsm_mask[NVDIMM_BUS_FAMILY_INTEL]) {
			set_bit(NVDIMM_BUS_FAMILY_INTEL, mask);
			nd_desc->fw_ops = intel_bus_fw_ops;
		}
	}

	adev = to_acpi_dev(acpi_desc);
	if (!adev)
		return;
@@ -2160,7 +2209,6 @@ static void acpi_nfit_init_dsms(struct acpi_nfit_desc *acpi_desc)
	for (i = ND_CMD_ARS_CAP; i <= ND_CMD_CLEAR_ERROR; i++)
		if (acpi_check_dsm(adev->handle, guid, 1, 1ULL << i))
			set_bit(i, &nd_desc->cmd_mask);
	set_bit(ND_CMD_CALL, &nd_desc->cmd_mask);

	dsm_mask =
		(1 << ND_CMD_ARS_CAP) |
@@ -2173,7 +2221,20 @@ static void acpi_nfit_init_dsms(struct acpi_nfit_desc *acpi_desc)
		(1 << NFIT_CMD_ARS_INJECT_GET);
	for_each_set_bit(i, &dsm_mask, BITS_PER_LONG)
		if (acpi_check_dsm(adev->handle, guid, 1, 1ULL << i))
			set_bit(i, &nd_desc->bus_dsm_mask);
			set_bit(i, &acpi_desc->bus_dsm_mask);

	/* Enumerate allowed NVDIMM_BUS_FAMILY_INTEL commands */
	dsm_mask = NVDIMM_BUS_INTEL_FW_ACTIVATE_CMDMASK;
	guid = to_nfit_bus_uuid(NVDIMM_BUS_FAMILY_INTEL);
	mask = &acpi_desc->family_dsm_mask[NVDIMM_BUS_FAMILY_INTEL];
	for_each_set_bit(i, &dsm_mask, BITS_PER_LONG)
		if (acpi_check_dsm(adev->handle, guid, 1, 1ULL << i))
			set_bit(i, mask);

	if (*mask == dsm_mask) {
		set_bit(NVDIMM_BUS_FAMILY_INTEL, &nd_desc->bus_family_mask);
		nd_desc->fw_ops = intel_bus_fw_ops;
	}
}

static ssize_t range_index_show(struct device *dev,
@@ -3485,7 +3546,10 @@ static int __acpi_nfit_clear_to_send(struct nvdimm_bus_descriptor *nd_desc,
	return 0;
}

/* prevent security commands from being issued via ioctl */
/*
 * Prevent security and firmware activate commands from being issued via
 * ioctl.
 */
static int acpi_nfit_clear_to_send(struct nvdimm_bus_descriptor *nd_desc,
		struct nvdimm *nvdimm, unsigned int cmd, void *buf)
{
@@ -3496,10 +3560,15 @@ static int acpi_nfit_clear_to_send(struct nvdimm_bus_descriptor *nd_desc,
			call_pkg->nd_family == NVDIMM_FAMILY_INTEL) {
		func = call_pkg->nd_command;
		if (func > NVDIMM_CMD_MAX ||
		    (1 << func) & NVDIMM_INTEL_SECURITY_CMDMASK)
		    (1 << func) & NVDIMM_INTEL_DENY_CMDMASK)
			return -EOPNOTSUPP;
	}

	/* block all non-nfit bus commands */
	if (!nvdimm && cmd == ND_CMD_CALL &&
			call_pkg->nd_family != NVDIMM_BUS_FAMILY_NFIT)
		return -EOPNOTSUPP;

	return __acpi_nfit_clear_to_send(nd_desc, nvdimm, cmd);
}

@@ -3791,6 +3860,7 @@ static __init int nfit_init(void)
	guid_parse(UUID_NFIT_DIMM_N_HPE2, &nfit_uuid[NFIT_DEV_DIMM_N_HPE2]);
	guid_parse(UUID_NFIT_DIMM_N_MSFT, &nfit_uuid[NFIT_DEV_DIMM_N_MSFT]);
	guid_parse(UUID_NFIT_DIMM_N_HYPERV, &nfit_uuid[NFIT_DEV_DIMM_N_HYPERV]);
	guid_parse(UUID_INTEL_BUS, &nfit_uuid[NFIT_BUS_INTEL]);

	nfit_wq = create_singlethread_workqueue("nfit");
	if (!nfit_wq)
+386 −0
@@ -7,6 +7,48 @@
#include "intel.h"
#include "nfit.h"

static ssize_t firmware_activate_noidle_show(struct device *dev,
		struct device_attribute *attr, char *buf)
{
	struct nvdimm_bus *nvdimm_bus = to_nvdimm_bus(dev);
	struct nvdimm_bus_descriptor *nd_desc = to_nd_desc(nvdimm_bus);
	struct acpi_nfit_desc *acpi_desc = to_acpi_desc(nd_desc);

	return sprintf(buf, "%s\n", acpi_desc->fwa_noidle ? "Y" : "N");
}

static ssize_t firmware_activate_noidle_store(struct device *dev,
		struct device_attribute *attr, const char *buf, size_t size)
{
	struct nvdimm_bus *nvdimm_bus = to_nvdimm_bus(dev);
	struct nvdimm_bus_descriptor *nd_desc = to_nd_desc(nvdimm_bus);
	struct acpi_nfit_desc *acpi_desc = to_acpi_desc(nd_desc);
	ssize_t rc;
	bool val;

	rc = kstrtobool(buf, &val);
	if (rc)
		return rc;
	if (val != acpi_desc->fwa_noidle)
		acpi_desc->fwa_cap = NVDIMM_FWA_CAP_INVALID;
	acpi_desc->fwa_noidle = val;
	return size;
}
DEVICE_ATTR_RW(firmware_activate_noidle);

bool intel_fwa_supported(struct nvdimm_bus *nvdimm_bus)
{
	struct nvdimm_bus_descriptor *nd_desc = to_nd_desc(nvdimm_bus);
	struct acpi_nfit_desc *acpi_desc = to_acpi_desc(nd_desc);
	unsigned long *mask;

	if (!test_bit(NVDIMM_BUS_FAMILY_INTEL, &nd_desc->bus_family_mask))
		return false;

	mask = &acpi_desc->family_dsm_mask[NVDIMM_BUS_FAMILY_INTEL];
	return *mask == NVDIMM_BUS_INTEL_FW_ACTIVATE_CMDMASK;
}

static unsigned long intel_security_flags(struct nvdimm *nvdimm,
		enum nvdimm_passphrase_type ptype)
{
@@ -389,3 +431,347 @@ static const struct nvdimm_security_ops __intel_security_ops = {
};

const struct nvdimm_security_ops *intel_security_ops = &__intel_security_ops;

static int intel_bus_fwa_businfo(struct nvdimm_bus_descriptor *nd_desc,
		struct nd_intel_bus_fw_activate_businfo *info)
{
	struct {
		struct nd_cmd_pkg pkg;
		struct nd_intel_bus_fw_activate_businfo cmd;
	} nd_cmd = {
		.pkg = {
			.nd_command = NVDIMM_BUS_INTEL_FW_ACTIVATE_BUSINFO,
			.nd_family = NVDIMM_BUS_FAMILY_INTEL,
			.nd_size_out =
				sizeof(struct nd_intel_bus_fw_activate_businfo),
			.nd_fw_size =
				sizeof(struct nd_intel_bus_fw_activate_businfo),
		},
	};
	int rc;

	rc = nd_desc->ndctl(nd_desc, NULL, ND_CMD_CALL, &nd_cmd, sizeof(nd_cmd),
			NULL);
	*info = nd_cmd.cmd;
	return rc;
}

/* The fw_ops expect to be called with the nvdimm_bus_lock() held */
static enum nvdimm_fwa_state intel_bus_fwa_state(
		struct nvdimm_bus_descriptor *nd_desc)
{
	struct acpi_nfit_desc *acpi_desc = to_acpi_desc(nd_desc);
	struct nd_intel_bus_fw_activate_businfo info;
	struct device *dev = acpi_desc->dev;
	enum nvdimm_fwa_state state;
	int rc;

	/*
	 * It should not be possible for platform firmware to return
	 * busy because activate is a synchronous operation. Treat it
	 * similar to invalid, i.e. always refresh / poll the status.
	 */
	switch (acpi_desc->fwa_state) {
	case NVDIMM_FWA_INVALID:
	case NVDIMM_FWA_BUSY:
		break;
	default:
		/* check if capability needs to be refreshed */
		if (acpi_desc->fwa_cap == NVDIMM_FWA_CAP_INVALID)
			break;
		return acpi_desc->fwa_state;
	}

	/* Refresh with platform firmware */
	rc = intel_bus_fwa_businfo(nd_desc, &info);
	if (rc)
		return NVDIMM_FWA_INVALID;

	switch (info.state) {
	case ND_INTEL_FWA_IDLE:
		state = NVDIMM_FWA_IDLE;
		break;
	case ND_INTEL_FWA_BUSY:
		state = NVDIMM_FWA_BUSY;
		break;
	case ND_INTEL_FWA_ARMED:
		if (info.activate_tmo > info.max_quiesce_tmo)
			state = NVDIMM_FWA_ARM_OVERFLOW;
		else
			state = NVDIMM_FWA_ARMED;
		break;
	default:
		dev_err_once(dev, "invalid firmware activate state %d\n",
				info.state);
		return NVDIMM_FWA_INVALID;
	}

	/*
	 * Capability data is available in the same payload as state. It
	 * is expected to be static.
	 */
	if (acpi_desc->fwa_cap == NVDIMM_FWA_CAP_INVALID) {
		if (info.capability & ND_INTEL_BUS_FWA_CAP_FWQUIESCE)
			acpi_desc->fwa_cap = NVDIMM_FWA_CAP_QUIESCE;
		else if (info.capability & ND_INTEL_BUS_FWA_CAP_OSQUIESCE) {
			/*
			 * Skip hibernate cycle by default if platform
			 * indicates that it does not need devices to be
			 * quiesced.
			 */
			acpi_desc->fwa_cap = NVDIMM_FWA_CAP_LIVE;
		} else
			acpi_desc->fwa_cap = NVDIMM_FWA_CAP_NONE;
	}

	acpi_desc->fwa_state = state;

	return state;
}

static enum nvdimm_fwa_capability intel_bus_fwa_capability(
		struct nvdimm_bus_descriptor *nd_desc)
{
	struct acpi_nfit_desc *acpi_desc = to_acpi_desc(nd_desc);

	if (acpi_desc->fwa_cap > NVDIMM_FWA_CAP_INVALID)
		return acpi_desc->fwa_cap;

	if (intel_bus_fwa_state(nd_desc) > NVDIMM_FWA_INVALID)
		return acpi_desc->fwa_cap;

	return NVDIMM_FWA_CAP_INVALID;
}

static int intel_bus_fwa_activate(struct nvdimm_bus_descriptor *nd_desc)
{
	struct acpi_nfit_desc *acpi_desc = to_acpi_desc(nd_desc);
	struct {
		struct nd_cmd_pkg pkg;
		struct nd_intel_bus_fw_activate cmd;
	} nd_cmd = {
		.pkg = {
			.nd_command = NVDIMM_BUS_INTEL_FW_ACTIVATE,
			.nd_family = NVDIMM_BUS_FAMILY_INTEL,
			.nd_size_in = sizeof(nd_cmd.cmd.iodev_state),
			.nd_size_out =
				sizeof(struct nd_intel_bus_fw_activate),
			.nd_fw_size =
				sizeof(struct nd_intel_bus_fw_activate),
		},
		/*
		 * Even though activate is run from a suspended context,
		 * for safety, still ask platform firmware to force
		 * quiesce devices by default. Let a module
		 * parameter override that policy.
		 */
		.cmd = {
			.iodev_state = acpi_desc->fwa_noidle
				? ND_INTEL_BUS_FWA_IODEV_OS_IDLE
				: ND_INTEL_BUS_FWA_IODEV_FORCE_IDLE,
		},
	};
	int rc;

	switch (intel_bus_fwa_state(nd_desc)) {
	case NVDIMM_FWA_ARMED:
	case NVDIMM_FWA_ARM_OVERFLOW:
		break;
	default:
		return -ENXIO;
	}

	rc = nd_desc->ndctl(nd_desc, NULL, ND_CMD_CALL, &nd_cmd, sizeof(nd_cmd),
			NULL);

	/*
	 * Whether the command succeeded or failed, the agent checking
	 * for the result needs to query the DIMMs individually.
	 * Increment the activation count to invalidate all the DIMM
	 * states at once (it's otherwise not possible to take
	 * acpi_desc->init_mutex in this context)
	 */
	acpi_desc->fwa_state = NVDIMM_FWA_INVALID;
	acpi_desc->fwa_count++;

	dev_dbg(acpi_desc->dev, "result: %d\n", rc);

	return rc;
}

static const struct nvdimm_bus_fw_ops __intel_bus_fw_ops = {
	.activate_state = intel_bus_fwa_state,
	.capability = intel_bus_fwa_capability,
	.activate = intel_bus_fwa_activate,
};

const struct nvdimm_bus_fw_ops *intel_bus_fw_ops = &__intel_bus_fw_ops;

static int intel_fwa_dimminfo(struct nvdimm *nvdimm,
		struct nd_intel_fw_activate_dimminfo *info)
{
	struct {
		struct nd_cmd_pkg pkg;
		struct nd_intel_fw_activate_dimminfo cmd;
	} nd_cmd = {
		.pkg = {
			.nd_command = NVDIMM_INTEL_FW_ACTIVATE_DIMMINFO,
			.nd_family = NVDIMM_FAMILY_INTEL,
			.nd_size_out =
				sizeof(struct nd_intel_fw_activate_dimminfo),
			.nd_fw_size =
				sizeof(struct nd_intel_fw_activate_dimminfo),
		},
	};
	int rc;

	rc = nvdimm_ctl(nvdimm, ND_CMD_CALL, &nd_cmd, sizeof(nd_cmd), NULL);
	*info = nd_cmd.cmd;
	return rc;
}

static enum nvdimm_fwa_state intel_fwa_state(struct nvdimm *nvdimm)
{
	struct nfit_mem *nfit_mem = nvdimm_provider_data(nvdimm);
	struct acpi_nfit_desc *acpi_desc = nfit_mem->acpi_desc;
	struct nd_intel_fw_activate_dimminfo info;
	int rc;

	/*
	 * Similar to the bus state, since activate is synchronous the
	 * busy state should resolve within the context of 'activate'.
	 */
	switch (nfit_mem->fwa_state) {
	case NVDIMM_FWA_INVALID:
	case NVDIMM_FWA_BUSY:
		break;
	default:
		/* If no activations occurred the old state is still valid */
		if (nfit_mem->fwa_count == acpi_desc->fwa_count)
			return nfit_mem->fwa_state;
	}

	rc = intel_fwa_dimminfo(nvdimm, &info);
	if (rc)
		return NVDIMM_FWA_INVALID;

	switch (info.state) {
	case ND_INTEL_FWA_IDLE:
		nfit_mem->fwa_state = NVDIMM_FWA_IDLE;
		break;
	case ND_INTEL_FWA_BUSY:
		nfit_mem->fwa_state = NVDIMM_FWA_BUSY;
		break;
	case ND_INTEL_FWA_ARMED:
		nfit_mem->fwa_state = NVDIMM_FWA_ARMED;
		break;
	default:
		nfit_mem->fwa_state = NVDIMM_FWA_INVALID;
		break;
	}

	switch (info.result) {
	case ND_INTEL_DIMM_FWA_NONE:
		nfit_mem->fwa_result = NVDIMM_FWA_RESULT_NONE;
		break;
	case ND_INTEL_DIMM_FWA_SUCCESS:
		nfit_mem->fwa_result = NVDIMM_FWA_RESULT_SUCCESS;
		break;
	case ND_INTEL_DIMM_FWA_NOTSTAGED:
		nfit_mem->fwa_result = NVDIMM_FWA_RESULT_NOTSTAGED;
		break;
	case ND_INTEL_DIMM_FWA_NEEDRESET:
		nfit_mem->fwa_result = NVDIMM_FWA_RESULT_NEEDRESET;
		break;
	case ND_INTEL_DIMM_FWA_MEDIAFAILED:
	case ND_INTEL_DIMM_FWA_ABORT:
	case ND_INTEL_DIMM_FWA_NOTSUPP:
	case ND_INTEL_DIMM_FWA_ERROR:
	default:
		nfit_mem->fwa_result = NVDIMM_FWA_RESULT_FAIL;
		break;
	}

	nfit_mem->fwa_count = acpi_desc->fwa_count;

	return nfit_mem->fwa_state;
}

static enum nvdimm_fwa_result intel_fwa_result(struct nvdimm *nvdimm)
{
	struct nfit_mem *nfit_mem = nvdimm_provider_data(nvdimm);
	struct acpi_nfit_desc *acpi_desc = nfit_mem->acpi_desc;

	if (nfit_mem->fwa_count == acpi_desc->fwa_count
			&& nfit_mem->fwa_result > NVDIMM_FWA_RESULT_INVALID)
		return nfit_mem->fwa_result;

	if (intel_fwa_state(nvdimm) > NVDIMM_FWA_INVALID)
		return nfit_mem->fwa_result;

	return NVDIMM_FWA_RESULT_INVALID;
}

static int intel_fwa_arm(struct nvdimm *nvdimm, enum nvdimm_fwa_trigger arm)
{
	struct nfit_mem *nfit_mem = nvdimm_provider_data(nvdimm);
	struct acpi_nfit_desc *acpi_desc = nfit_mem->acpi_desc;
	struct {
		struct nd_cmd_pkg pkg;
		struct nd_intel_fw_activate_arm cmd;
	} nd_cmd = {
		.pkg = {
			.nd_command = NVDIMM_INTEL_FW_ACTIVATE_ARM,
			.nd_family = NVDIMM_FAMILY_INTEL,
			.nd_size_in = sizeof(nd_cmd.cmd.activate_arm),
			.nd_size_out =
				sizeof(struct nd_intel_fw_activate_arm),
			.nd_fw_size =
				sizeof(struct nd_intel_fw_activate_arm),
		},
		.cmd = {
			.activate_arm = arm == NVDIMM_FWA_ARM
				? ND_INTEL_DIMM_FWA_ARM
				: ND_INTEL_DIMM_FWA_DISARM,
		},
	};
	int rc;

	switch (intel_fwa_state(nvdimm)) {
	case NVDIMM_FWA_INVALID:
		return -ENXIO;
	case NVDIMM_FWA_BUSY:
		return -EBUSY;
	case NVDIMM_FWA_IDLE:
		if (arm == NVDIMM_FWA_DISARM)
			return 0;
		break;
	case NVDIMM_FWA_ARMED:
		if (arm == NVDIMM_FWA_ARM)
			return 0;
		break;
	default:
		return -ENXIO;
	}

	/*
	 * Invalidate the bus-level state, now that we're committed to
	 * changing the 'arm' state.
	 */
	acpi_desc->fwa_state = NVDIMM_FWA_INVALID;
	nfit_mem->fwa_state = NVDIMM_FWA_INVALID;

	rc = nvdimm_ctl(nvdimm, ND_CMD_CALL, &nd_cmd, sizeof(nd_cmd), NULL);

	dev_dbg(acpi_desc->dev, "%s result: %d\n", arm == NVDIMM_FWA_ARM
			? "arm" : "disarm", rc);
	return rc;
}

static const struct nvdimm_fw_ops __intel_fw_ops = {
	.activate_state = intel_fwa_state,
	.activate_result = intel_fwa_result,
	.arm = intel_fwa_arm,
};

const struct nvdimm_fw_ops *intel_fw_ops = &__intel_fw_ops;