## Abstract

Increasing deployment of advanced sensing, controls, and communication infrastructure enables buildings to provide services to the power grid, leading to the concept of grid-interactive efficient buildings. Since occupant activities and preferences primarily drive the availability and operational flexibility of building devices, there is a critical need to develop occupant-centric approaches that prioritize devices for providing grid services, while maintaining the desired end-use quality of service. In this paper, we present a decision-making framework that enables a building owner/operator to effectively prioritize loads for curtailment service under uncertainties, while minimizing any adverse impact on the occupants. The proposed framework uses a stochastic (Markov) model to represent the probabilistic behavior of device usage from power consumption data, and a load prioritization algorithm that dynamically ranks building loads via stochastic multi-criteria decision-making. The proposed load prioritization framework is illustrated via numerical simulations of a residential building use-case, including plug-loads, air-conditioners, and plug-in electric vehicle chargers, in the context of load curtailment as a grid service. Suitable metrics are proposed to evaluate the closed-loop performance of the proposed prioritization algorithm under various scenarios and design choices. Scalability of the proposed algorithm is established via computational analysis, while time-series plots are used to intuitively explain the ranking choices.

## 1 Introduction

Buildings consume approximately 75% of US electricity and drive as much as 80% of peak power demand in some regions [1]. Although buildings are the key driver of electricity demand, they can also be a part of the solution to peak demand issues by reducing energy consumption or temporarily shifting energy usage without negatively impacting occupant comfort. Growing deployment of smart sensing, controls, and communication infrastructure has given rise to the emerging concept of grid-interactive efficient buildings (GEBs) which, in addition to striving for energy efficiency, also take an active part in grid ancillary services or demand response (e.g., curtailment, peak reduction, regulation, etc.) [2]. Various demand response schemes exist (and are emerging) worldwide including, for example, incentive-based mechanisms, contractual agreements, time-of-use pricing, etc. [3,4], and typically involve buildings shifting or changing their energy usage pattern while still maintaining occupant comfort and safety. Identifying the latent energy flexibility in the various building loads, i.e., their ability to (temporarily) change power consumption without adversely impacting end-user comfort, is key to unlocking the GEB potential. Real-time identification and selection of building electrical devices and equipment offering energy flexibility is, however, a challenge for building operators/owners attempting to simultaneously balance occupant needs (for comfort, business conduct, safety, security, etc.) and meet demand response requests and/or contracts.

Research efforts on dynamic load prioritization methods and algorithms in residential and commercial buildings are limited, with most of the existing works assuming a preassigned, static, user-specified priority order for the devices [5–7]. For example, the authors in Ref. [5] proposed a supervisory controller to select a fixed number of rooftop air-conditioning units for grid services, prioritized based on their energy requests, similar to the packetized energy-based device selection proposed in Refs. [8,9]. A multi-objective stochastic optimization model was formulated in Ref. [6] to schedule flexible residential devices (e.g., air-conditioners, water-heaters, clothes dryers, and electric vehicles) with user-defined priorities for demand response participation. To account for daily demand variability, a rule-based two-level priority scheme for plug-loads was proposed in Ref. [7], one for the day-time and one for the evening, allowing fine-tuning of the priority levels based on occupancy and sensor information. As opposed to such rule-based/preassigned static priority lists, determining a real-time, adaptive, and dynamic prioritization scheme is non-trivial, since it requires taking into account several (possibly conflicting) criteria and factors, such as varying occupant needs and preferences, different end-uses of participating devices, demand response requirements, controls and communications bandwidth, uncertainties in occupant behavior and weather conditions, etc. Nevertheless, some recent works have looked into dynamic prioritization schemes. The authors in Ref. [10] applied a stochastic multi-criteria decision-making (MCDM) algorithm for prioritizing thermostatic loads (e.g., air-conditioners); however, the analysis was not extended to other device types such as batteries and plug-loads.
An analytic hierarchy process (AHP)-based heuristic scheme was proposed in Ref. [11] to prioritize curtailable loads (rooftop units) in small and medium commercial buildings using predefined quantitative and qualitative ranking criteria. However, the AHP scheme can produce inconsistent results if pairwise comparisons induce flawed logical inferences (e.g., the statements $a \succ b$, $b \succ c$, and $c \succ a$ are logically inconsistent), which makes it prone to human error. A temperature-based priority-stack method was used in Refs. [12–14] for thermostatic loads (e.g., air-conditioners and electric water-heaters) participating in primary and secondary frequency regulation services. User-specified subjective ranking of various objectives (e.g., comfort, energy efficiency, emissions, etc.) was used in Ref. [15] to develop a model predictive control framework to schedule plug-loads, thermostatic loads, and batteries. However, the method does not generate any explicit priority list of the devices and is tied to the particular optimization-based scheduling scheme.

In this paper, we propose a decision-making framework (illustrated in Fig. 1) for real-time prioritization of a heterogeneous selection of building loads for the provision of grid services, based on several ranking criteria such as end-use comfort, grid service reliability, and communication bandwidth. The proposed framework has two major components: (1) data-driven modeling: available measurements as well as contextual information such as occupancy, weather, indoor temperature, battery state-of-charge (SoC), etc. are used to develop predictive models of the utilization of the candidate devices and (2) multi-criteria prioritization: the predictive usage models are used to estimate the performance scores of the devices across the multiple criteria, which are then fed into an MCDM algorithm [10,16,17] to generate the devices' ranks. End-user feedback and sensor measurements are used to update the scoring system in real-time to better adapt to changing end-usage patterns and grid service requests. The rest of this paper is organized as follows. A detailed description of the proposed load prioritization algorithm is provided in Sec. 2. Section 3 presents numerical simulation results demonstrating the application of the developed prioritization framework to a residential building scenario. We conclude the article in Sec. 4.

## 2 Dynamic Load Prioritization Framework

In this section, we describe the components of the proposed prioritization framework considering load curtailment as the grid service and plug-loads, air-conditioners (ACs), and plug-in electric vehicle (PEV) chargers as the candidate devices.

### 2.1 Ranking Criteria and Performance Scores.

The ranking criteria are used to capture the qualitative objectives pursued by various stakeholders (e.g., the occupants, the building operator, the building owner, etc.), related to the quality of end-use and delivered grid service, and controls or communications bandwidth. In this paper, we have selected three such criteria for ranking (motivated by the work in Ref. [10]): (1) “*comfort*”: quality of end-user experience (e.g., room temperature) delivered by the operation of a device; (2) “*reliability*”: successful (accurate and timely) response of a device to a control command for grid service; and (3) “*bandwidth*”: the number of devices needed to be engaged for the requested grid service.

Scores are assigned to each device based on their performance evaluation with regard to each of the above ranking criteria, using the methodology described below. For example, for residential ACs, "comfort" is related to the perceived thermal comfort of the occupants. Thermal comfort can be measured as a function of the difference between the indoor temperature and the set-point temperature, or could be estimated based on (historical) feedback from the occupants. For uniformity, the scores are normalized between 0 and 1, with higher scores reflecting better performance of a device on a given criterion. Focusing on load curtailment as the grid service of interest, we explain the scoring methodology for the three different criteria.

*Comfort Scores, $X_1^n$:* The comfort score of device $n$ refers to its availability to change power consumption without any perceived impact on end-user convenience, i.e., higher values of $X_1^n$ represent better comfort. We explain the adopted comfort scoring methodology for the different types of loads as follows:

*AC*: Thermal comfort associated with the operation of an AC is a function of the difference between the actual indoor temperature ($T^n[h]$, with $h$ denoting the hour) and the desired set-point temperature ($T_s^n$):

$$(\mathrm{AC}) \quad X_1^n = \min\left(1,\; X_1^{n+} \cdot X_1^{n-}\right), \quad \text{where} \quad X_1^{n+} = \frac{1+\exp\left(-\alpha_1^n\right)}{1+\exp\left(-\alpha_1^n\left(\frac{T^n[h]-T_s^n}{\delta T^{n+}}+1\right)\right)}, \quad X_1^{n-} = \frac{1+\exp\left(-\alpha_1^n\right)}{1+\exp\left(\alpha_1^n\left(\frac{T^n[h]-T_s^n}{\delta T^{n-}}-1\right)\right)} \tag{1}$$

Here, $\alpha_1^n$ is a positive scalar, while $\delta T^{n+}$ and $\delta T^{n-}$ are used to model the (possibly asymmetric) degradation of perceived comfort below and above the set-point, respectively; accordingly, $X_1^{n+}$ and $X_1^{n-}$ are meant to (loosely) capture the discomfort at temperatures lower and higher than the set-point. Figure 2(a) illustrates the comfort scores at different room temperatures, with the following parameters:

$$\left(\alpha_1^n,\, T_s^n,\, \delta T^{n+},\, \delta T^{n-}\right) = \left(2.3,\; 72\,^{\circ}\mathrm{F},\; 2\,^{\circ}\mathrm{F},\; 3\,^{\circ}\mathrm{F}\right) \tag{2}$$

This comfort model is in alignment with the findings of an end-user survey in Ref. [15]: the comfort score attains its maximum value of 1 at the set-point temperature and tapers off (asymmetrically) on both sides of the desired temperature. The parameters in (2) can be adjusted from historical data to fine-tune the comfort curve to occupants' perceptions.

*Plug-loads*: The calculation of the comfort score for plug-loads is fundamentally different from that for ACs. Typically, the utility of a plug-load is directly related to its power consumption, as opposed to its energy consumption (e.g., room temperature depends on how long the AC has been "on"). In this work, we propose to quantify the comfort in plug-load usage at a given time as a measure of its closeness to the nominal (or expected) usage pattern, learned from historical data or otherwise. For example, living room lights should be "on" in the evening, while bedroom lights should be "off" at night, or the TV should be "on" when a particular show is on the air.
Let the binary variable $u^n[h] \in \{0,1\}$ denote the actual usage of plug-load $n$ (1: being used, 0: not being used) at time $h$, while $\bar{u}^n[h] \in [0,1]$ denotes its probability of usage. The comfort score is then modeled as

$$(\text{plug-load}) \quad X_1^n = \frac{1-e^{\alpha_1^n\left(\Delta u^n[h]-1\right)}}{\left(1-e^{-\alpha_1^n/2}\right)\left(1+e^{\alpha_1^n\left(\Delta u^n[h]-1/2\right)}\right)}, \quad \text{where} \quad \Delta u^n[h] = \left|u^n[h]-\bar{u}^n[h]\right| \tag{3}$$

and $\alpha_1^n$ is a positive scalar which determines the sensitivity of the comfort scores to the usage probability values. Figure 2(b) shows the resulting comfort score for a plug-load under two scenarios, $u^n = 0$ (not in use) and $u^n = 1$ (in use), as a function of the probability of usage, with $\alpha_1^n = 10$. Note, however, that even though the proposed comfort model (3) is *symmetric* (i.e., $\Delta u^n[h]$ takes absolute values), non-symmetric comfort scores can be accommodated just as easily in the prioritization framework.

*Plug-in electric vehicle*: A PEV charging load is only available during the charging hours, with the charging task remaining at time $h$ typically specified as the delivery of a minimum required SoC ($\Delta E_h^n$) within a maximum allowable duration ($\Delta h^n > 0$). The comfort score for a PEV charging load is then measured as a function of the difference between the current charging rate of the PEV ($P^n[h]$) and the required mean charging rate ($= \Delta E_h^n/\Delta h^n$) needed to meet the charging goal:

$$(\mathrm{PEV}) \quad X_1^n = \min\left(1,\; \frac{1-e^{\alpha_1^n\left(\beta^n[h]-1\right)}}{\left(1-e^{-\alpha_1^n/2}\right)\left(1+e^{\alpha_1^n\left(\beta^n[h]-1/2\right)}\right)}\right), \quad \text{where} \quad \beta^n[h] = \max\left(0,\; \frac{\Delta E_h^n}{P_r^n\,\Delta h^n} - \frac{\left(1-\epsilon^n\right)P^n[h]}{P_r^n}\right) \tag{4}$$

Here, $\alpha_1^n$ and $\epsilon^n < 1$ are positive scalars, and $P_r^n$ is the maximum charging rate (or the rated power). The scalar $\alpha_1^n$ determines the sensitivity of the comfort scores to the required mean charging rate, while $\epsilon^n$ represents a safety margin for the charging goal.
Figure 2(c) shows the comfort score for a PEV charger under two scenarios, charging ($P^n = P_r^n$) and not charging ($P^n = 0$), as a function of the required mean charging rate (relative to rated), with $\alpha_1^n = 10$ and $\epsilon^n = 0.4$. A charging task is "feasible" if $\Delta E_h^n < P_r^n\,\Delta h^n$.
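For concreteness, the three comfort score models (1), (3), and (4) can be implemented directly. The sketch below uses the parameter values quoted in the text (e.g., from (2)); the function names and default arguments are of our choosing.

```python
import math

def comfort_ac(T, T_s=72.0, dT_plus=2.0, dT_minus=3.0, alpha=2.3):
    """AC comfort score (1): equals 1 at the set-point, tapers off asymmetrically."""
    x_plus = (1 + math.exp(-alpha)) / (1 + math.exp(-alpha * ((T - T_s) / dT_plus + 1)))
    x_minus = (1 + math.exp(-alpha)) / (1 + math.exp(alpha * ((T - T_s) / dT_minus - 1)))
    return min(1.0, x_plus * x_minus)

def comfort_plug_load(u, u_bar, alpha=10.0):
    """Plug-load comfort score (3): 1 when actual usage matches expected usage."""
    du = abs(u - u_bar)
    num = 1 - math.exp(alpha * (du - 1))
    den = (1 - math.exp(-alpha / 2)) * (1 + math.exp(alpha * (du - 0.5)))
    return num / den

def comfort_pev(P, dE, dh, P_r=5.0, alpha=10.0, eps=0.4):
    """PEV comfort score (4) from the required mean charge rate vs. current rate."""
    beta = max(0.0, dE / (P_r * dh) - (1 - eps) * P / P_r)
    num = 1 - math.exp(alpha * (beta - 1))
    den = (1 - math.exp(-alpha / 2)) * (1 + math.exp(alpha * (beta - 0.5)))
    return min(1.0, num / den)
```

As a quick sanity check, `comfort_ac(72.0)` evaluates to 1 at the set-point, and `comfort_plug_load(0, 1.0)` evaluates to 0 when a device expected to be in use is off.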

*Reliability Scores, $X_2^n$:* In the context of the load curtailment grid service, reliability refers to the candidate device being in the correct operational state (i.e., in the powered "on" state) to be able to successfully execute a curtail request (by switching "off" or reducing power). As such, the reliability scores for devices signed up for a curtailment program are given by

$$X_2^n = u^n[h]$$

i.e., depending on whether the device is drawing power ($u^n = 1$) or not ($u^n = 0$) at time $h$. For simplicity, we assume that each device has only two admissible power consumption states, either drawing its rated power (denoted by $u^n = 1$) or not drawing any power at all (i.e., $u^n = 0$).

*Bandwidth Scores, $X_3^n$:* In the context of load curtailment, the bandwidth score relates to the amount of load curtailed per control command issued to devices. For example, consider a communication bandwidth restricted scenario where the building operator can only issue five control commands every minute. In such a scenario, turning off five 5 kW devices achieves greater load curtailment than turning off five 1.5 kW devices. In other words, devices with higher rated power have more favorable bandwidth scores. Let $P_r^n$ denote the power rating of the $n$th device. The bandwidth score for the $n$th device is then defined as the normalized rated power

$$X_3^n = \frac{P_r^n}{\max_{m \in \mathcal{N}} P_r^m}$$

### 2.2 Data-Driven Predictive Modeling.

It is necessary for the multi-criteria decision algorithm, described later in this article, to have access to probabilistic models that can forecast the impact of certain selection decisions on the performance of the devices across criteria. Although device-level metering and occupancy measurement are not currently ubiquitous, especially in residential buildings, the sub-metering trends in the buildings industry [18] project a future where they will be available. We rely on simulation models (described in Sec. 3) to generate synthetic time-series data for device-specific power consumption measurements and other relevant contextual information such as occupancy, indoor temperature, battery SoC, etc. Using the scoring methodology described earlier, time-series sequences of the performance scores are also generated. Collections of such "historical" data-sets, including time-series sequences of power measurements, occupancy, performance scores, and other contextual information, are used to develop stochastic (Markov) models for predicting device behavior as a consequence of selection choices.

In particular, we propose *hidden Markov models* (HMMs) to better capture the effect of various contextual but (often) unobservable factors, such as end-user activities, on device utilization and performance. We define an "activity scenario" as an observed state of operation of the set of devices, in which each device is drawing a certain power. For example, consider two devices A and B, each having two possible states of power consumption, 1 (i.e., drawing power) and 0 (i.e., not drawing any power). The corresponding set of activity scenarios would be (0,0), (0,1), (1,0), (1,1). We recognize that in reality the power consumed by devices does not take deterministic values and can vary over a spectrum depending on the time or type of use. Therefore, with realistic data, activity scenarios can be defined based on a clustering algorithm applied to the power consumption data for all devices. In this paper, however, we assume that the devices can only have binary operating states ($u^n \in \{0,1\}$). The "activity scenarios" serve as the states of an HMM, in which the operation of the set of devices is assumed to follow a Markov process with unobservable (i.e., hidden) states. The objective of solving the HMM is to learn the probabilistic behavior of the transitions between activity scenarios in each building using the streaming power consumption data. At the start of each sampling period, a maximum-likelihood estimation method such as the Viterbi [19,20] or the Baum–Welch algorithm [21] is used to estimate the transition probabilities between the activity scenarios. The time-series data with continuous values (e.g., room temperature, battery SoC, performance scores) are discretized appropriately (e.g., comfort scores discretized to 10 values within 0 and 1) to construct the states of the Markov model.
Once identified, the HMMs are used to predict the device operation, including the forecast of the performance scores as a consequence of the selection choices (e.g., committing an AC to load curtailment). These predictive models of performance scores are used in the prioritization algorithm, as described next.
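As a simplified, fully observed counterpart to the HMM estimation described above (which would use the Baum–Welch or Viterbi algorithms), the sketch below estimates transition probabilities between activity scenarios directly from a binary on/off time-series; the helper names and the uniform fallback for unvisited scenarios are our assumptions.

```python
import numpy as np

def scenario_index(states):
    """Map a tuple of binary device states, e.g. (1, 0), to a scenario id."""
    idx = 0
    for s in states:
        idx = (idx << 1) | s
    return idx

def estimate_transition_matrix(state_series, n_devices):
    """Empirical (maximum-likelihood) transition probabilities between
    activity scenarios, assuming the scenario sequence is fully observed."""
    K = 2 ** n_devices
    counts = np.zeros((K, K))
    for prev, curr in zip(state_series[:-1], state_series[1:]):
        counts[scenario_index(prev), scenario_index(curr)] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # Unvisited scenarios fall back to a uniform row (an assumed prior)
    return np.divide(counts, row_sums,
                     out=np.full_like(counts, 1.0 / K),
                     where=row_sums > 0)
```

For streaming data, the counts can simply be accumulated at each sampling period and the matrix re-normalized, which mirrors the per-period re-estimation described above.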

### 2.3 Prioritization Algorithm.

We apply the stochastic MCDM algorithm used in Refs. [16,17] for prioritizing alternatives under uncertainties. The key idea behind stochastic ranking is the use of probability theory to rank alternatives based on their predicted scores over a finite number of measurable and independent ranking criteria, where the scores evolve according to some stochastic process. Motivated by the work in Ref. [10], we extend the application of the stochastic MCDM algorithm to prioritizing a wide variety of end-use electrical loads (including plug-loads, PEVs, as well as thermostatic loads), which are ranked using the following *criteria*: (i) comfort, (ii) reliability, and (iii) bandwidth. Suppose there are $N$ devices in a building, let $\mathcal{N} := \{1, \ldots, N\}$ be the index set for devices, and let $n \in \mathcal{N}$ denote the $n$th device. Similarly, let $A$ be the number of criteria ($A = 3$ in this work), $\mathcal{A} := \{1, \ldots, A\}$ be the index set for criteria, and $a \in \mathcal{A}$ be the $a$th criterion. Let $X_a^n$ denote the random variable representing the performance score of candidate device $n \in \mathcal{N}$ evaluated against criterion $a \in \mathcal{A}$, and let $X^n := (X_a^n : a \in \mathcal{A})$ be the vector of scores for device $n$ against all criteria. It is assumed that $X_a^n$ and $X_b^n$ are mutually independent for all $a, b \in \mathcal{A}$, and that $X_a^n$ is finite and discrete for all $a \in \mathcal{A}$, $n \in \mathcal{N}$. Denote by $\pi_a^n(\cdot)$ the probability mass function (p.m.f.) of $X_a^n$. Let $E_a^{n>m}$ be the event that device $n$ outscores device $m$ on criterion $a$. The event $E_a^{n>m}$ is stated to take place

- *with probability 1*, if $X_a^n > X_a^m$ occurs;
- *with probability 0.5*, if $X_a^n = X_a^m$ occurs; and
- *with probability 0*, otherwise.

*Step 1* (pairwise comparison using a single criterion): In this step, an $N \times N$ comparison matrix $C_a$ is constructed for each criterion $a \in \mathcal{A}$, where each off-diagonal entry represents the probability that one alternative is adjudged superior to another with respect to the given criterion. The entry corresponding to row $n$ and column $m$ of $C_a$ is defined as

$$[C_a]_{n,m} = \Pr\left(E_a^{n>m}\right) = \sum_{x > y} \pi_a^n(x)\,\pi_a^m(y) + \frac{1}{2}\sum_{x} \pi_a^n(x)\,\pi_a^m(x)$$

The diagonal entries of $C_a$ can be disregarded, as comparing a device to itself has no physical interpretation.

*Step 2* (pairwise comparison using the vector of criteria): Using the probabilities from Step 1, the overall probabilities comparing devices are evaluated using their vectors of scores over all criteria. Let $X^n$ and $X^m$ be the (random) score vectors of devices $n$ and $m$, respectively. To analyze overall preference, each value in the score vector of $n$ is compared to the corresponding value in the score vector of $m$. For each criterion $a \in \mathcal{A}$ and pair of devices $(n, m)$, define the corresponding indicator variable with respect to the event $E_a^{n>m}$:

$$G_a^{n,m} := \mathbb{1}\left(E_a^{n>m}\right), \qquad G^{n,m} := \left(G_a^{n,m} : a \in \mathcal{A}\right)$$

The random vector $G^{n,m}$ provides an overall summary of the relative superiority of the performance of device $n$ over device $m$ across all criteria. For example, a sample realization $(0, 0, 1)$ of $G^{n,m}$ indicates that device $n$ is adjudged superior to device $m$ for criterion 3 but inferior for criteria 1 and 2. Clearly, the number of possible realizations of $G^{n,m}$ is $2^A$. The probability of occurrence of the $h$th realization, where $h$ denotes a specific sample realization, is denoted by $p_h^{n,m}$; by the assumed independence of the criteria, it is the product of the corresponding per-criterion probabilities from Step 1.

*Step 3* (classification of possible outcomes): For every ordered pair $(n, m)$, the possible outcomes of the score vector comparisons from Step 2 are grouped into three mutually exclusive sets, namely, *Most Preferable* ($S_1^{n,m}$), *Indifferent* ($S_2^{n,m}$), and *Not Preferable* ($S_3^{n,m}$), using a linear classification rule involving criteria weights imposed by a decision-maker a priori (see Ref. [16]). Let us define a (normalized) criteria weights vector as

$$W := \left(w_a : a \in \mathcal{A}\right), \qquad w_a \geq 0, \qquad \sum_{a \in \mathcal{A}} w_a = 1$$

and a threshold $\nu \in (0.5, 1)$. Each sample realization $g$ of $G^{n,m}$ is then classified as follows:

$$g \in \begin{cases} S_1^{n,m}, & \text{if } \sum_{a} w_a\, g_a > \nu \\[2pt] S_2^{n,m}, & \text{if } 1-\nu \leq \sum_{a} w_a\, g_a \leq \nu \\[2pt] S_3^{n,m}, & \text{if } \sum_{a} w_a\, g_a < 1-\nu \end{cases}$$

For each pair $(n, m)$, the probability that an outcome belongs to class $S_i^{n,m}$, $i = 1, 2, 3$, is given by

$$\Pr\left(S_i^{n,m}\right) = \sum_{h:\, g_h \in S_i^{n,m}} p_h^{n,m}$$

*Step 4* (ranking using the decision probabilities): Based on the rule mentioned in Ref. [16], the relative superiority of device $n$ over device $m$ (across all criteria) is evaluated as

$$r^{n>m} = \Pr\left(S_1^{n,m}\right) + \frac{1}{2}\Pr\left(S_2^{n,m}\right)$$

The values $r^{n>m}$ are collected into the composite comparison matrix $R = [r^{n>m}]$, with the diagonal entries set to 1 (and disregarded). Recall that the matrices $C_a$ denote the pairwise comparison of the devices against a single criterion $a \in \mathcal{A}$, while $R$ denotes the pairwise comparison across all the criteria. Finally, a fitness vector $F = [f^n]$ is constructed, with each of its elements, henceforth referred to as the *fitness values*, denoting the mean of the off-diagonal entries in the corresponding row of $R$, i.e., $f^n = \sum_{m \neq n} r^{n>m}/(N-1)$. The candidate devices are ranked in descending order of the elements of the fitness vector $F$. Note that the fitness value $f^n$ is an upper bound on the probability of device $n$ being the most superior, and therefore gives an optimistic estimate of the likelihood of superiority of that device.
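To make Steps 1–4 concrete, the following minimal Python sketch ranks devices from their per-criterion score p.m.f.s. The threshold form of the classification rule (classes split at $\nu$ and $1-\nu$) and the product-form realization probabilities are our assumptions about the rule in Ref. [16]; all function and variable names are illustrative.

```python
import itertools
import numpy as np

def outscore_prob(pmf_n, pmf_m):
    """P(E_a^{n>m}) = P(X^n > X^m) + 0.5 * P(X^n == X^m).
    Each pmf maps a discrete score value to its probability."""
    p = 0.0
    for x, px in pmf_n.items():
        for y, py in pmf_m.items():
            if x > y:
                p += px * py
            elif x == y:
                p += 0.5 * px * py
    return p

def rank_devices(pmfs, W, nu=0.7):
    """Stochastic MCDM ranking sketch.
    pmfs[n][a]: p.m.f. of device n's score on criterion a;
    W: criteria weights (sum to 1); nu in (0.5, 1): classification threshold."""
    N, A = len(pmfs), len(W)
    # Step 1: pairwise comparison matrices, one per criterion
    C = np.zeros((A, N, N))
    for a in range(A):
        for n in range(N):
            for m in range(N):
                if n != m:
                    C[a, n, m] = outscore_prob(pmfs[n][a], pmfs[m][a])
    # Steps 2-4: enumerate the 2^A realizations of G^{n,m}, classify, aggregate
    R = np.ones((N, N))
    for n in range(N):
        for m in range(N):
            if n == m:
                continue
            p1 = p2 = 0.0
            for g in itertools.product([0, 1], repeat=A):
                pg = 1.0  # probability of this realization (criteria independent)
                for a, ga in enumerate(g):
                    pg *= C[a, n, m] if ga else 1 - C[a, n, m]
                score = sum(w * ga for w, ga in zip(W, g))
                if score > nu:            # Most Preferable
                    p1 += pg
                elif score >= 1 - nu:     # Indifferent
                    p2 += pg
            R[n, m] = p1 + 0.5 * p2
    fitness = np.array([(R[n].sum() - 1.0) / (N - 1) for n in range(N)])
    return np.argsort(-fitness), fitness
```

The nested loops over device pairs give the quadratic $O[N^2]$ scaling discussed below, with the $2^A$ inner enumeration constant for a fixed (small) number of criteria.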

*Computational Complexity:* The stochastic MCDM algorithm involves algebraic calculations of the pairwise comparison probabilities for all possible realizations of $G^{n,m}$ (for any pair $n$, $m$). The number of possible realizations of $G^{n,m}$, however, grows exponentially with the number of criteria. As long as a reasonably small number of criteria are used, which are both relevant and applicable to multiple types of devices (plug-loads, AC, PEV, etc.), the algorithm remains computationally tractable. In particular, as we will see in Sec. 3, the MCDM algorithm has a polynomial (quadratic) complexity, $O[N^2]$, while brute-force enumeration incurs an exponential complexity, $O[10^N]$.

## 3 Numerical Results

To demonstrate the proposed load prioritization approach, we consider a single-family residential home consisting of a variety of end-use devices regarded as candidates for participating in a curtailment program: a PEV, a residential AC, lights, and other plug-loads, short-listed based on their impact on US annual energy consumption as reported in the study [22]. Table 1 presents a list of such devices, along with their location within the house and their typical power ratings. In this work, the devices are assumed to draw power only at their rated values. Nevertheless, an extension to devices with variable power consumption is possible with a corresponding extension to the activity scenarios. Synthetic time-series data of the devices' power measurements, performance scores, and other contextual information are generated using the following simulation models.

### 3.1 Synthetic Data Generation.

We used the following models to generate the synthetic data for illustration of the proposed ranking methodology.

*Residential Air-Conditioner:* The time-series of the on/off state ($u^n \in \{0,1\}$) of the residential AC was generated as follows. A discrete three-parameter Weibull distribution proposed in Ref. [26] was used to model the probability of the AC running (controlled via a manual wall switch) at any given time, given the (estimated) occupancy at that moment. A one-mass equivalent thermal parameter model was used to represent the thermal dynamics of the indoor temperature [12]. Note that this simpler model was used only to generate sufficient data to drive the modeling effort; the rest of the proposed prioritization framework is independent of this modeling assumption. The actual operational state, $u^n$, of the AC is determined by two factors: (1) whether or not the AC is running, determined by the (expected) occupancy in a probabilistic sense and (2) when running, the on/off status of the thermostat, determined by the local controller which maintains the indoor temperature within an acceptable range of the temperature set-point, given varying outside air temperature. The TMY3 dataset [27] was used to generate the outdoor air temperature profiles for the summer months in Phoenix, AZ. The corresponding time-series of the performance scores is computed using the methodology described in Sec. 2.1.

*Plug-Loads:* Each plug-load device *n* under consideration is intuitively assigned a probability of usage *P*(*u*^{n} = 1|*h*) based on hour-of-day *h*, for a work-day scenario. The occupancy patterns from a single-family home reported in Ref. [28] form the basis for the profiles shown in Fig. 3. For example, home refrigerators are usually never turned off and have a high probability of usage throughout the day, while living room lights are typically “on” in the evening.

To generate a wide variety of activity scenarios, multiple test runs are carried out using a random generator to assign the rated power to the time-series consumption data, $Pn[h]=Prn$, with a probability *P*(*u*^{n} = 1|*h*). Time-series of the performance scores are computed from the usage probabilities and the power consumption data, as described in Sec. 2.1.
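As an illustration of this procedure, the sketch below draws on/off states from an hour-of-day usage probability profile and maps them to rated-power consumption. The 5-min sampling resolution matches the measurement interval used later in the simulations, but the example profile and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def synth_plug_load(usage_prob_by_hour, rated_kw, days=90, steps_per_hour=12):
    """Draw on/off states from hour-of-day usage probabilities and map them
    to rated-power consumption (5-min steps, i.e., 12 per hour)."""
    series = []
    for _ in range(days):
        for h in range(24):
            on = rng.random(steps_per_hour) < usage_prob_by_hour[h]
            series.extend(rated_kw * on)  # P^n[h] = P_r^n with prob. P(u^n=1|h)
    return np.asarray(series)

# e.g., living-room lights: likely "on" in the evening (illustrative profile)
prob = [0.05] * 17 + [0.9] * 6 + [0.05]
power = synth_plug_load(prob, rated_kw=0.1)
```

The resulting series only takes the values 0 and the rated power, consistent with the binary operating-state assumption stated earlier.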

*Plug-in Electric Vehicle:* PEVs are assumed to be in use during the day-time and plugged in for charging overnight. The vehicle is assumed to be plugged into a wall-charger at the residential building at around 6 p.m., with a battery SoC between 15% and 25%, and is expected to be charged to at least 85% of its battery capacity by 8 a.m. The charging process is assumed to follow a simple linear model, as described in Ref. [29]. While charging powers and battery sizes vary considerably [25], we assume values of 5 kW and 40 kW h, respectively, for data generation in this paper. The PEV is modeled to draw power close to its rated power (with small variations of less than $\pm 5\%$ driven by fluctuations in grid voltage) whenever plugged in, unless the battery is fully charged. Time-series of the performance scores are calculated using the methodology in Sec. 2.1.
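A minimal sketch of this kind of linear charging model (our simplified stand-in for the model of Ref. [29], ignoring the small grid-voltage variations) with the 5 kW / 40 kW h values from the text:

```python
def simulate_pev_charging(soc0_kwh, cap_kwh=40.0, p_rated_kw=5.0,
                          dt_h=1 / 12, hours=14.0):
    """Linear charging: SoC grows at the rated power (5-min steps),
    with charging stopped once the battery is full."""
    soc, trace = soc0_kwh, []
    for _ in range(int(hours / dt_h)):
        p = p_rated_kw if soc < cap_kwh else 0.0  # stop when full
        soc = min(cap_kwh, soc + p * dt_h)
        trace.append((p, soc))                    # (power drawn, SoC) per step
    return trace

# plug in at 6 p.m. with 20% SoC; charge window runs 14 h (until 8 a.m.)
trace = simulate_pev_charging(soc0_kwh=0.2 * 40.0)
```

Within the 14 h window, a 20%-SoC battery reaches full charge well before 8 a.m., after which the charger draws no power and the PEV offers no curtailable load.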

### 3.2 Multi-Criteria Prioritization.

*Illustration of the Ranking Algorithm*: To illustrate the working of the ranking algorithm, we present time-series data from closed-loop simulations from 6 p.m. to 8 a.m. (next day) on a typical work-day. Ranks are updated every 15 min, while measurements (and scores) are available at 5 min intervals. Two different choices of criteria weights are considered:

- Case 1: *W* = (0.5, 0.3, 0.2) (comfort, reliability, bandwidth);
- Case 2: *W* = (0.3, 0.5, 0.2);

and the two top-most ranked devices are picked for curtailment at every ranking update. Simulation results from case 1 are shown in Figs. 4–10, with case 2 results in Figs. 11 and 12.

We first discuss case 1 (Figs. 4–10). Refrigerators are always ranked low due to the expected adverse impact of their curtailment on the "comfort scores," as reflected in their very high usage probability throughout the day (Fig. 5) to keep the stored food fresh. ACs are ranked low early in the evening (around 6–9 p.m.) due to their need to stay ON to bring the warm indoor temperature close to the set-point, after which the AC is typically ranked high (alternating between ranks 1–3) until 8 a.m. Note that the AC is expected to be OFF during the day when the house is typically unoccupied, leading to an increase in the indoor temperature prior to 6 p.m. Figure 6 shows the evolution of the outdoor and indoor temperatures, as well as the comfort scores and the utilization of the AC. The PEV, on the other hand, is ranked high in the evening (until after midnight), up to the point when the time left to reach the target SoC becomes low enough that the PEV has to start charging (Fig. 7) and be unavailable for curtailment, which is reflected in lower ranks. The lights and the ceiling fan present a couple of interesting insights into the algorithm. Lights are ranked low early in the evening because of their high usage probability at that time (Fig. 8), moving up the ranking order afterwards. Note that during the overnight hours the lights are ranked between 1 and 2, even though they are usually turned OFF, leading to low reliability scores; this reflects the lack of a sufficient number of devices able to provide curtailment during overnight hours. On a similar note, ceiling fans are ranked 2 early in the evening (higher than the AC, refrigerator, and lights), even though fans are rarely used during those hours, leading to poor reliability scores (but no adverse impact on comfort). Fans continue to occupy top ranks, especially during overnight hours when they have high usage probability.
The lack of clear choices for devices that can provide curtailment service during the overnight hours is also reflected in the corresponding *fitness values* during those hours (Fig. 10). A high fitness value for a rank indicates that the device occupying that rank is indeed a suitable candidate, while a large separation between the fitness value of a ranked device and that of the next ranked device signifies the clarity of that particular ranking assignment. For example, from Fig. 10, the choice for rank 1 (which is the PEV) is quite clear and distinct in the early evening hours, while the choices for ranks 2 and 3 are less so. During overnight hours, on the other hand, there do not appear to be clear choices for ranks 1–3, which is also reflected in the fluctuating ranking positions (Fig. 4).

*Performance Analysis*: Next, we present results analyzing the performance of the proposed ranking algorithm with respect to the evaluation metrics (11), as well as its computational efficiency. All results are from 90 work-day simulations (summer months). In Fig. 13, we show how the ranking performance varies (with respect to the metrics *s*_{1} and *s*_{2}) as the criteria weights change, for two different choices of the number of top-ranked devices picked for curtailment: 1 and 2. As expected, increasing the reliability weight increases the mean curtailment value, and *vice versa*, while selecting two devices results in lower comfort but higher curtailed power for the same choice of criteria weights. Next, we fix the criteria weights to *W* = (0.5, 0.3, 0.2) and evaluate the performance of the stochastic MCDM algorithm as we vary: (a) the number of top-ranked devices picked for curtailment, from 1 to 3, and (b) the ranking update frequency, from once every 5 min to once every 30 min (the ranks are held fixed between two consecutive updates). The results are summarized in Table 2. The comfort values decrease, with an increase in the mean curtailment values, as a higher number of devices is committed for curtailment, while the ranking update frequency does not appear to impact the performance. The MCDM algorithm outperforms a *randomized* ranking method (i.e., devices assigned ranks arbitrarily) on both metrics, with a "baseline" scenario showing the (best-case) mean comfort value expected when no device participates in curtailment services. Finally, we compare the computational complexity of the proposed algorithm with that of explicit (brute-force) enumeration. A comparison between the two algorithms is illustrated in Fig. 14.
The MCDM algorithm attains polynomial complexity with respect to the number of candidate devices, *O*[*N*^{2}], taking about 30 ms to compute ranks of 100+ devices (computational time statistics obtained from 100 samples for each *N*). On the other hand, the brute-force method scales exponentially, *O*[10^{N}], taking about 15 s when *N* = 7, making it inapplicable to realistically sized problems.

**Table 2** Ranking performance evaluation (*s*_{1}, *s*_{2})

| No. of selections | Rank update (min) | Random | MCDM | Baseline |
|---|---|---|---|---|
| 1 | 5 | (0.80, 0.69 kW) | (0.80, 1.93 kW) | (0.89, 0 kW) |
| 1 | 15 | (0.80, 0.66 kW) | (0.81, 1.47 kW) | |
| 1 | 30 | (0.80, 0.68 kW) | (0.81, 1.38 kW) | |
| 2 | 5 | (0.70, 1.67 kW) | (0.75, 2.40 kW) | |
| 2 | 15 | (0.69, 1.59 kW) | (0.77, 1.93 kW) | |
| 2 | 30 | (0.69, 1.74 kW) | (0.75, 2.13 kW) | |
| 3 | 5 | (0.57, 3.00 kW) | (0.58, 3.82 kW) | |
| 3 | 15 | (0.55, 2.98 kW) | (0.59, 3.86 kW) | |
| 3 | 30 | (0.56, 2.90 kW) | (0.59, 3.83 kW) | |
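The complexity gap between the two ranking strategies can be illustrated with a minimal sketch. Here each device's suitability is reduced to a single scalar score, the pairwise method counts head-to-head "wins" (mirroring the pairwise-probability structure that gives the MCDM algorithm its *O*[*N*^{2}] cost), and the brute-force method enumerates every ordering against a position-weighted objective. The device names, scores, and the specific objective are illustrative assumptions, not the paper's implementation.

```python
import itertools

def pairwise_rank(scores):
    """O(N^2): rank devices by counting pairwise wins; a higher win
    count earns a better (earlier) rank."""
    wins = {d: sum(scores[d] > scores[o] for o in scores if o != d)
            for d in scores}
    return sorted(wins, key=wins.get, reverse=True)

def brute_force_rank(scores):
    """O(N!): enumerate all orderings and keep the one maximizing a
    position-weighted total score (an illustrative objective)."""
    n = len(scores)
    best = max(itertools.permutations(scores),
               key=lambda order: sum((n - i) * scores[d]
                                     for i, d in enumerate(order)))
    return list(best)

scores = {"PEV": 0.9, "AC": 0.6, "plug-load": 0.4, "heater": 0.2}
assert pairwise_rank(scores) == brute_force_rank(scores)
```

Both strategies agree here, but the enumeration explores all *N*! orderings, which is why the brute-force approach becomes impractical beyond a handful of devices while the pairwise approach scales to 100+ devices.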


## 4 Conclusions

In this paper, we presented a framework that enables a building owner/operator to effectively value and prioritize loads under temporal uncertainty in building occupancy and other exogenous variables. The proposed load prioritization framework comprises a modeling step, which uses a stochastic (Markov) model to learn the probabilistic behavior of device usage from power consumption data, and a load prioritization step, which dynamically ranks building loads using a stochastic MCDM framework. A simulation case study for a residential building scenario was used to demonstrate the proposed prioritization approach for curtailment service. Time-series plots were used to intuitively explain the rationale behind the ranking outcomes, and the impact of various design choices (such as selection criteria weights, number of devices to commit, and ranking update frequency) on ranking performance was investigated. Future research will explore the applicability of the proposed framework to commercial buildings and connected communities, and generalize it to grid services beyond curtailment.

## Footnote

Consider any $K$ random variables $\{X_i\}_{i=1}^{K}$. For any pair of $k$th and $l$th random variables, we have $P(X_k > \max_{i \neq k} X_i) \leq P(X_k > X_l)$ by definition. It therefore follows that $P(X_k > \max_{i \neq k} X_i) \leq \sum_{i \neq k} P(X_k > X_i)/(K-1)$.
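The bound in this footnote can be sanity-checked numerically. The sketch below draws independent Gaussian samples (an illustrative choice; the bound holds for any random variables) and compares the Monte Carlo estimate of $P(X_0 > \max_{i \neq 0} X_i)$ against the averaged pairwise probabilities. Note that in every sample, the event $\{x_0 > \max_i x_i\}$ implies $\{x_0 > x_j\}$ for each $j$, so the empirical inequality holds deterministically.

```python
import random

random.seed(0)
K, trials = 4, 20000
means = [0.0, 0.3, 0.6, 0.9]          # illustrative distribution parameters

lhs_hits = 0                           # X_0 beats the max of the others
pair_hits = [0] * (K - 1)              # X_0 beats each X_j individually
for _ in range(trials):
    x = [random.gauss(m, 1.0) for m in means]
    if x[0] > max(x[1:]):
        lhs_hits += 1
    for j in range(1, K):
        if x[0] > x[j]:
            pair_hits[j - 1] += 1

lhs = lhs_hits / trials
rhs = sum(pair_hits) / ((K - 1) * trials)
# Empirical check of: P(X_k > max_{i!=k} X_i) <= sum_{i!=k} P(X_k > X_i)/(K-1)
assert lhs <= rhs
```

This averaged pairwise bound is what allows the maximum-probability computation to be replaced by $O(K)$ pairwise comparisons per device.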

## Acknowledgment

The work was supported by the Building Technologies Office of the U.S. Department of Energy under Contract No. DE-AC05-76RL01830.

## Conflict of Interest

There are no conflicts of interest.