Abstract

Increasing deployment of advanced sensing, controls, and communication infrastructure enables buildings to provide services to the power grid, leading to the concept of grid-interactive efficient buildings. Since occupant activities and preferences primarily drive the availability and operational flexibility of building devices, there is a critical need to develop occupant-centric approaches that prioritize devices for providing grid services, while maintaining the desired end-use quality of service. In this paper, we present a decision-making framework that enables a building owner/operator to effectively prioritize loads for curtailment service under uncertainties, while minimizing any adverse impact on the occupants. The proposed framework uses a stochastic (Markov) model to represent the probabilistic behavior of device usage from power consumption data, and a load prioritization algorithm that dynamically ranks building loads via stochastic multi-criteria decision-making. The proposed load prioritization framework is illustrated via numerical simulations in a residential building use-case, including plug-loads, air-conditioners, and plug-in electric vehicle chargers, in the context of load curtailment as a grid service. Suitable metrics are proposed to evaluate the closed-loop performance of the proposed prioritization algorithm under various scenarios and design choices. Scalability of the proposed algorithm is established via computational analysis, while time-series plots are used for intuitive explanation of the ranking choices.

1 Introduction

Buildings consume approximately 75% of US electricity and drive as much as 80% of peak power demand in some regions [1]. Although buildings are the key driver of electricity demand, they can also be a part of the solution to peak demand issues by reducing energy consumption or temporarily shifting energy usage without negatively impacting occupant comfort. Growing deployment of smart sensing, controls, and communication infrastructure has given rise to the emerging concept of grid-interactive efficient buildings (GEBs) which, in addition to striving for energy efficiency, also take an active part in grid ancillary services or demand response (e.g., curtailment, peak reduction, regulation, etc.) [2]. Various demand response schemes exist (and are emerging) worldwide including, for example, incentives-based mechanisms, contractual agreements, time-of-use pricing, etc. [3,4], and typically involve buildings shifting or changing their energy usage pattern while still maintaining occupant comfort and safety. Identifying the latent energy flexibility in the various building loads, i.e., their ability to (temporarily) change power consumption without adversely impacting end-user comfort, is key to unlocking the GEB potential. Real-time identification and selection of building electrical devices and equipment offering energy flexibility is, however, a challenge for a building operator/owner attempting to simultaneously balance occupant needs (for comfort, business conduct, safety, security, etc.) and meet demand response requests and/or contracts.

Research efforts on dynamic load prioritization methods and algorithms in residential and commercial buildings are limited, with most of the existing works assuming a preassigned, static, user-specified priority order for the devices [5–7]. For example, the authors in Ref. [5] proposed a supervisory controller to select a fixed number of rooftop air-conditioning units for grid services prioritized based on their energy requests, similar to the packetized energy-based device selection proposed in Refs. [8,9]. A multi-objective stochastic optimization model was formulated in Ref. [6] to schedule flexible residential devices (e.g., air-conditioners, water-heaters, clothes dryers, and electric vehicles) with user-defined priorities for demand response participation. To account for daily demand variability, a rule-based two-level priority scheme for plug-loads was proposed in Ref. [7], one for the day-time and one for the evening, allowing fine-tuning of the priority levels based on occupancy and the sensors’ information. As opposed to such rule-based/preassigned static priority lists, determining a real-time adaptive and dynamic prioritization scheme is non-trivial, since it requires taking into account several (possibly conflicting) criteria and factors, such as varying occupant needs and preferences, different end-usage of participating devices, demand response requirements, controls and communications bandwidth, uncertainties in occupant behavior and weather conditions, etc. Nevertheless, some recent works have looked into dynamic prioritization schemes. The authors in Ref. [10] applied a stochastic multi-criteria decision-making (MCDM) algorithm for prioritizing thermostatic loads (e.g., air-conditioners); however, the analysis was not extended to other device types such as batteries and plug-loads.
An analytic hierarchy process (AHP)-based heuristic scheme was proposed [11] to prioritize curtailable loads (rooftop units) in small and medium commercial buildings using predefined quantitative and qualitative ranking criteria. However, the AHP scheme can produce inconsistent results if pairwise comparisons induce flawed logic inferences (e.g., the pairwise judgments a ≻ b, b ≻ c, and c ≻ a are logically inconsistent), which makes it prone to human error. A temperature-based priority-stack method was used in Refs. [12–14] for thermostatic loads (e.g., air-conditioners and electric water-heaters) participating in primary and secondary frequency regulation services. User-specified subjective ranking of various objectives (e.g., comfort, energy efficiency, emissions, etc.) was used in Ref. [15] to develop a model predictive control framework to schedule plug-loads, thermostatic loads, and batteries. However, the method does not generate any explicit priority list of the devices and is tied to the particular optimization-based scheduling scheme.

In this paper, we propose a decision-making framework (illustrated in Fig. 1) for real-time prioritization of a heterogeneous selection of building loads for provision of grid services based on several ranking criteria, such as end-use comfort, grid service reliability, and communication bandwidth. The proposed framework has two major components: (1) data-driven modeling: using available measurements as well as contextual information such as occupancy, weather, indoor temperature, battery state-of-charge (SoC), etc. to develop predictive models of the utilization of the candidate devices and (2) multi-criteria prioritization: the predictive usage models are used to estimate the performance scores of the devices across the multiple criteria, which are then fed into a MCDM algorithm [10,16,17] to generate the devices’ ranks. End-user feedback and sensor measurements are used to update the scoring system in real-time to better adapt to the changing end-usage pattern and grid service requests. The rest of this paper is organized as follows. A detailed description of the proposed load prioritization algorithm is provided in Sec. 2. Section 3 presents numerical simulation results demonstrating the developed prioritization framework applied to a residential building scenario. We conclude the article in Sec. 4.

Fig. 1
Illustration of the proposed load prioritization framework

2 Dynamic Load Prioritization Framework

In this section, we describe the components of the proposed prioritization framework considering load curtailment as the grid service and plug-loads, air-conditioners (ACs), and plug-in electric vehicle (PEV) chargers as the candidate devices.

2.1 Ranking Criteria and Performance Scores.

The ranking criteria are used to capture the qualitative objectives pursued by various stakeholders (e.g., the occupants, the building operator, the building owner, etc.), related to the quality of end-use and delivered grid service, and controls or communications bandwidth. In this paper, we have selected three such criteria for ranking (motivated by the work in Ref. [10]): (1) “comfort”: quality of end-user experience (e.g., room temperature) delivered by the operation of a device; (2) “reliability”: successful (accurate and timely) response of a device to a control command for grid service; and (3) “bandwidth”: the number of devices needed to be engaged for the requested grid service.

Scores are assigned to each device based on its performance with regard to each of the above ranking criteria, using the methodology described below. For example, for residential ACs, “comfort” is related to the perceived thermal comfort of the occupants. Thermal comfort can be measured as a function of the difference between the indoor temperature and the set-point temperature or could be estimated based on (historical) feedback from the occupants. For uniformity, the scores are normalized between 0 and 1, with higher scores reflecting better performance of a device for a certain criterion. Focusing on load curtailment as the grid service of interest, we explain the scoring methodology for the three different criteria.

Comfort Scores, X1n: The comfort score of device n refers to its availability to change power consumption without any perceived impact on end-user convenience, i.e., higher values of X1n represent better comfort. We explain the adopted comfort scoring methodology for different types of loads as follows:

  • AC: Thermal comfort associated with the operation of an AC is a function of the difference between the actual (Tn (h), with h denoting the hour) and the desired (Tsn) indoor temperatures as follows:
    (1)
    α1n is a positive scalar, and δTn+ and δTn− are used to model the (possibly asymmetric) degradation of perceived comfort at higher and lower temperatures relative to the set-point. X1n+ and X1n− are meant to (loosely) capture the discomfort at temperatures lower and higher than the set-point, respectively.
    Figure 2(a) illustrates the comfort scores at different room temperatures, with the following parameters:
    (2)
    This comfort model is in alignment with the findings of an end-user survey in Ref. [15]. Specifically, the comfort attains the maximum value of 1 at the set-point temperature and tapers off (asymmetrically) on both sides of the desired temperature. The parameters in (2) can be adjusted from historical data to fine-tune the comfort curve to occupants’ perceptions.
  • Plug-loads: The calculation of the comfort score for plug-loads is fundamentally different from that for ACs. Typically, the utility of a plug-load is directly related to its power consumption, as opposed to energy consumption (e.g., room temperature depends on how long the AC has been “on”). In this work, we propose to quantify the comfort in plug-load usage at a given time as a measure of its closeness to the nominal (or expected) usage pattern learned from historical data or otherwise. For example, living room lights should be “on” in the evening, while bedroom lights are “off” at night, or the TV should be “on” when a particular show is on the air, etc. Let us denote by the binary variable un[h] ∈ {0,1} the actual usage of plug-load n (1: being used, 0: not being used) at time h, while ūn[h] ∈ [0,1] denotes the probability of usage. The comfort score is then modeled as follows:
    (3)
    and α1n is a positive scalar which determines the sensitivity of the comfort scores to the usage probability values. Figure 2(b) shows the resulting comfort score for a plug-load under two scenarios, un = 0 (not in use) and un = 1 (in use), as a function of the probability of usage, with α1n=10. Note, however, that even though the proposed comfort model (3) is symmetric (i.e., Δun[h] takes absolute values), non-symmetric comfort scores can be accommodated as easily into the prioritization framework.
  • Plug-in electric vehicle: A PEV charging load is only available during the charging hours, with the charging task remaining at time h being typically specified as the delivery of a minimum required SoC (ΔEhn) within a maximum allowable duration (Δhn > 0). The comfort score for a PEV charging load is then measured as a function of the difference between the current charging rate of the PEV (Pn[h]) and the required mean charging rate (=ΔEhn/Δhn) to meet the charging goal:
    (4)
    α1n and ɛn < 1 are positive scalars, and Prn is the maximum charging rate (or the rated power). The scalar α1n determines the sensitivity of the comfort scores to required mean charge rates, while ɛn represents some sort of safety margin for the charging goal. Figure 2(c) shows the comfort score for a PEV charger under two scenarios: charging (Pn=Prn) and not charging (Pn = 0), as a function of the mean charging rate required (relative to rated), with α1n=10 and ɛn = 0.4. A charging task is “feasible” if ΔEhn<PrnΔhn.
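The comfort-score shapes described above can be sketched in code. The functional forms below (sigmoid and exponential curves) and all default parameter values are illustrative assumptions chosen to reproduce the qualitative behavior described for Eqs. (1), (3), and (4), not the exact published expressions:

```python
import math

def ac_comfort(T, T_set, dT_plus=2.0, dT_minus=2.0, alpha=4.0):
    """Asymmetric AC comfort curve: ~1 at the set-point, tapering off on
    both sides. The sigmoid-product form and parameters are assumptions."""
    upper = 1.0 / (1.0 + math.exp(alpha * (T - T_set - dT_plus)))
    lower = 1.0 / (1.0 + math.exp(-alpha * (T - T_set + dT_minus)))
    return upper * lower

def plug_load_comfort(u, u_bar, alpha=10.0):
    """Comfort as closeness of the actual usage u in {0,1} to the expected
    usage probability u_bar in [0,1]; the exponential form is an assumption."""
    return math.exp(-alpha * abs(u - u_bar))

def pev_comfort(P, P_rated, dE_kwh, dh_hours, alpha=10.0, eps=0.4):
    """Comfort as a function of the gap between the current charging rate P
    and the mean rate required to finish the charging task, with safety
    margin eps; the sigmoid form is an assumption."""
    required = dE_kwh / (dh_hours * P_rated)  # required mean rate, relative
    return 1.0 / (1.0 + math.exp(alpha * (required + eps - P / P_rated)))
```

With these forms, the AC score peaks near 1 at the set-point, the plug-load score decays with the mismatch between actual and expected usage, and the PEV score drops as the required mean charging rate approaches the rated power.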

Fig. 2
Illustration of the perceived comfort scores for three different types of loads: (a) (ACs) comfort as a function of the room temperature, given by (1), with the parameters in (2); (b) (plug-loads) comfort as a function of the nominal (expected) probability of usage, given by (3), with α1n=10; and (c) (PEVs) comfort as a function of the required mean charging rate, given by (4), with α1n=10 and ɛn = 0.4. (a) ACs, (b) plug-loads, and (c) PEVs.
Reliability Scores, X2n: In the context of load curtailment grid service, reliability refers to the candidate device being in the correct operational state (i.e., in the powered “on” state) to be able to successfully execute a curtail request (by switching “off” or reducing power). As such, the reliability scores for devices signed up for a curtailment program are given by
X2n[h] = un[h]    (5)
where un[h] ∈ {0,1} is a binary variable denoting whether the device is drawing power (un = 1) or not (un = 0) at time h. For simplicity, we assume that each device has only two admissible power consumption states, either drawing its rated power (denoted by un = 1) or not drawing any power at all (i.e., un = 0).
Bandwidth Scores, X3n: In the context of load curtailment, the bandwidth score relates to the amount of load curtailed via issuing a certain number of control commands to devices. For example, consider a communication bandwidth restricted scenario where the building operator can only issue five control commands every minute. In such a scenario, turning off five 5 kW devices achieves greater load curtailment, as opposed to turning off five 1.5 kW devices. In other words, devices with higher rated power have more favorable bandwidth scores. Let us denote by Prn the power rating of the n th device. Then the bandwidth score for the n th device is defined by
(6)
where we use logarithms to mitigate the large disparity in rated power between the ACs/PEVs and the plug-loads. For similarly rated devices, the logarithm can be omitted.
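In code, the reliability and bandwidth scoring reduces to a few lines; the log(1 + P) normalization below is one plausible choice (an assumption) that maps the bandwidth scores into (0, 1] as required:

```python
import math

def reliability_score(u):
    """A device can deliver curtailment only while drawing power, so the
    binary on/off state u itself serves as the reliability score."""
    return float(u)

def bandwidth_scores(rated_powers):
    """Log-scaled scores normalized into (0, 1]: higher-rated devices
    curtail more load per control command. Normalizing log(1 + P) against
    the largest device is an assumption."""
    logs = [math.log(1.0 + p) for p in rated_powers]
    top = max(logs)
    return [x / top for x in logs]

# AC, lights, ceiling fan, refrigerator, PEV charger (rated power in W)
scores = bandwidth_scores([3500, 47, 71, 225, 5000])
```

The log-scaling keeps the 47 W lights from being scored four orders of magnitude below the 5 kW PEV charger while preserving the ordering by rated power.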

2.2 Data-Driven Predictive Modeling.

It is necessary for the multi-criteria decision algorithm, described later in this article, to have access to probabilistic models that can forecast the impact of certain selection decisions on the performance of the devices across criteria. Although device-level metering and occupancy measurement are not ubiquitous currently, especially in residential buildings, the sub-metering trends in the buildings industry [18] project a future where those will be available. We rely on simulation models (described in Sec. 3) to generate the synthetic time-series data for device-specific power consumption measurements and other relevant contextual information such as occupancy, indoor temperature, battery SoC, etc. Using the scoring methodology described earlier, time-series sequences of the performance scores are also generated. Collections of such “historical” data-sets, including time-series sequences of power measurements, occupancy, performance scores, and other contextual information, are used to develop stochastic (Markov) models for prediction of device behavior as a consequence of selection choices.

In particular, we propose hidden Markov models (HMMs) to better capture the effect of various contextual but (often) unobservable factors, such as end-user activities, on device utilization and performance. We define an “activity scenario” as an observed state of operation of the set of devices, in which each device is drawing a certain power. For example, consider two devices A and B, each having two possible states of power consumption, 1 (i.e., drawing power) and 0 (i.e., not drawing any power). The corresponding set of activity scenarios would be (0,0), (0,1), (1,0), (1,1). We recognize that in reality the power consumed by devices does not take deterministic values and can vary over a spectrum depending on the time or type of use. Therefore, with realistic data, activity scenarios can be defined via a clustering algorithm applied to the power consumption data for all devices. In this paper, however, we assume that the devices can only have binary operating states (un ∈ {0,1}). The activity scenarios serve as the states of an HMM, in which the operation of each device is assumed to follow a Markov process with unobservable (i.e., hidden) states. The objective of solving the HMM is to learn the probabilistic behavior of the transitions between activity scenarios in each building using the streaming power consumption data. At the start of each sampling period, a maximum-likelihood estimation method such as the Viterbi [19,20] or the Baum–Welch [21] algorithm is used to estimate the transition probabilities between the activity scenarios. Time-series data with continuous values (e.g., room temperature, battery SoC, performance scores) are discretized appropriately (e.g., comfort scores discretized to 10 values between 0 and 1) to construct the states of the Markov model.
Once identified, the HMMs are used to predict the device operation, including the forecast of the performance scores as a consequence of the selection choices (e.g., committing an AC to load curtailment). These predictive models of performance scores are used in the prioritization algorithm, as described next.
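As a minimal sketch of the estimation step, the fully observed special case reduces to counting transitions between activity scenarios and normalizing each row; in the hidden-state setting, these raw counts would be replaced by the expected counts of the Baum–Welch E-step:

```python
from collections import Counter

def estimate_transition_matrix(scenarios):
    """Maximum-likelihood transition probabilities for a fully observed
    chain of activity scenarios (joint on/off tuples across devices):
    count observed transitions, then normalize each row to sum to 1."""
    states = sorted(set(scenarios))
    counts = Counter(zip(scenarios, scenarios[1:]))
    P = {}
    for s in states:
        row_total = sum(counts[(s, t)] for t in states)
        P[s] = {t: (counts[(s, t)] / row_total if row_total else 0.0)
                for t in states}
    return P

# Two devices with binary states: each scenario is a tuple (u_A, u_B)
seq = [(0, 0), (0, 1), (1, 1), (1, 1), (0, 1), (0, 0), (0, 1), (1, 1)]
P = estimate_transition_matrix(seq)
```

Each row of P is the estimated conditional distribution over next scenarios given the current one, re-estimated as new streaming data arrive.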

2.3 Prioritization Algorithm.

We apply a stochastic MCDM algorithm used in Refs. [16,17] for prioritizing alternatives under uncertainties. The key idea behind stochastic ranking is the use of probability theory to rank alternatives based on their predicted scores under a finite number of measurable and independent ranking criteria, where the scores evolve according to some stochastic process. Motivated by the work in Ref. [10], we extend the application of the stochastic MCDM algorithm to prioritizing a wide variety of end-use electrical loads (including plug-loads, PEVs, as well as thermostatic loads), which are ranked using the following criteria: (i) comfort, (ii) reliability, and (iii) bandwidth. Suppose there are N devices in a building, let N := {1,…,N} be the index set for the devices, and let n ∈ N denote the nth device. Similarly, let A be the number of criteria (A = 3 in this work), A := {1,…,A} the index set for the criteria, and a ∈ A the ath criterion. Let Xan denote the random variable representing the performance score of candidate device n ∈ N evaluated against criterion a ∈ A, and let Xn := (Xan : a ∈ A) be the vector of scores of device n against all criteria. It is assumed that Xan and Xbn are mutually independent for all a, b ∈ A, and that Xan is finite and discrete for all a ∈ A, n ∈ N. Denote by πan(·) the probability mass function (p.m.f.) of Xan. Let Ean>m be the event that device n outscores device m on criterion a. The event Ean>m is stated to take place

  • with probability 1, if Xan>Xam occurs;

  • with probability 0.5, if Xan=Xam occurs; and

  • with probability 0, otherwise.

In other words, we have the following probability relation:
P(Ean>m) = P(Xan > Xam) + 0.5 P(Xan = Xam)    (7)
Step 1 (pairwise comparison using single criterion): In this step, an N × N comparison matrix Ca is constructed for each criterion aA, where each off-diagonal entry represents the probability that one alternative is adjudged superior over another with respect to the given criterion. The entry corresponding to row n and column m of Ca is defined as
Ca(n, m) := P(Ean>m), n ≠ m    (8)
Note that the diagonal elements of Ca can be disregarded as comparing a device to itself has no physical interpretation.
Step 2 (pairwise comparison using vector of criteria): Using the probabilities from Step 1, the overall probabilities comparing devices are evaluated using their vectors of scores over all criteria. Let Xn and Xm be the (random) score vectors of devices n and m, respectively. To analyze overall preference, each value in the score vector of n is compared to the corresponding value in the score vector of m. For each criterion a ∈ A and a pair of devices (n, m), define the corresponding indicator variable with respect to the event Ean>m:
gan,m = 1 if the event Ean>m occurs, and gan,m = 0 otherwise. Let Gn,m := (gan,m : a ∈ A) be the vector of all such indicator variables. Note that Gn,m provides an overall summary of the relative superiority of the performance of device n over device m across all criteria. For example, a sample realization (0, 0, 1) of Gn,m indicates that device n is adjudged inferior to device m for criteria 1 and 2 but superior for criterion 3. Clearly, the number of possible realizations of Gn,m is 2^A. The probability of occurrence of the hth realization is denoted by
ph := P(Gn,m = gh) = ∏a∈A P(gan,m = gh,a)    (9)
where the second equality assumes mutual independence of the criteria, and the subscript h denotes specific sample realization.
Step 3 (classification of possible outcomes): For every ordered pair (n, m), the possible outcomes of score vector comparisons from Step 2 are grouped into three mutually exclusive sets, namely, Most Preferable (S1n,m), Indifferent (S2n,m), and Not Preferable (S3n,m), using a linear classification rule involving criteria weights imposed by a decision-maker a priori (see Ref. [16]). Let us define a (normalized) criteria weights vector as
W := (wa : a ∈ A), with wa ≥ 0 and ∑a∈A wa = 1, and a classification threshold ν ∈ (0.5, 1). Each sample realization of Gn,m is then classified as follows:
gh ∈ S1n,m if ∑a∈A wa gh,a ≥ ν; gh ∈ S3n,m if ∑a∈A wa gh,a ≤ 1 − ν; and gh ∈ S2n,m otherwise    (10)
Post-classification, for every ordered pair (n, m), the probability that an outcome belongs to the class Sin,m, i = 1, 2, 3, is given by P(Sin,m) = ∑{h : gh ∈ Sin,m} ph.
Step 4 (ranking using the decision probabilities): Based on the rule mentioned in Ref. [16], the relative superiority of device n over device m (across all criteria) is evaluated as
rn>m = P(S1n,m) + 0.5 P(S2n,m), which is the sum of the probability from the Most Preferable (S1n,m) set and 50% of the probability from the Indifferent (S2n,m) set. The pairwise relative comparison values rn>m are collected into the composite comparison matrix R = [rn>m], with the diagonal entries set to 1 (and disregarded). Recall that the matrices Ca denote the pairwise comparison of superiority of the devices against a single criterion a ∈ A, while R denotes the pairwise comparison across all the criteria. Finally, a fitness vector F = [fn] is constructed, with each of its elements, henceforth referred to as the fitness values, given by the mean of the off-diagonal entries in the corresponding row of R, i.e., fn = ∑m≠n rn>m/(N − 1). The candidate devices are ranked in descending order of the elements of the fitness vector F. Note that the fitness value fn is an upper bound on the probability of device n being the most superior, and therefore gives an optimistic estimate of the likelihood of superiority of that device.
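The four steps above can be condensed into a short routine. The sketch below assumes each score p.m.f. is given as a dictionary {score: probability}, and uses a simple weighted-sum threshold rule for Step 3 (one plausible reading of the classification rule in Ref. [16]):

```python
from itertools import product

def outscore_prob(pmf_n, pmf_m):
    """Step 1 entry: P(X_n > X_m) + 0.5 * P(X_n = X_m) over discrete pmfs."""
    p = 0.0
    for x, px in pmf_n.items():
        for y, py in pmf_m.items():
            if x > y:
                p += px * py
            elif x == y:
                p += 0.5 * px * py
    return p

def mcdm_rank(pmfs, W, nu=0.7):
    """Steps 1-4 of the stochastic MCDM ranking.
    pmfs[n][a]: {score: prob} for device n, criterion a.
    W: criteria weights (summing to 1); nu in (0.5, 1): threshold.
    Returns the fitness vector; rank devices by descending fitness."""
    N, A = len(pmfs), len(W)
    # Step 1: one pairwise comparison matrix per criterion
    C = [[[outscore_prob(pmfs[n][a], pmfs[m][a]) for m in range(N)]
          for n in range(N)] for a in range(A)]
    fitness = []
    for n in range(N):
        total = 0.0
        for m in range(N):
            if m == n:
                continue
            p1 = p2 = 0.0
            # Steps 2-3: enumerate all 2^A realizations of G^{n,m} and
            # classify each by its weighted sum against the threshold nu
            for g in product([0, 1], repeat=A):
                ph = 1.0
                for a in range(A):
                    ph *= C[a][n][m] if g[a] else (1.0 - C[a][n][m])
                wg = sum(W[a] * g[a] for a in range(A))
                if wg >= nu:            # Most Preferable
                    p1 += ph
                elif wg > 1.0 - nu:     # Indifferent
                    p2 += ph
            # Step 4: relative superiority r^{n>m}
            total += p1 + 0.5 * p2
        fitness.append(total / (N - 1))
    return fitness
```

For instance, with deterministic p.m.f.s in which one device outscores another on every criterion, that device attains r = 1 in the pairwise comparison, and the fitness values reproduce the expected ordering.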

Computational Complexity: The stochastic MCDM algorithm involves algebraic calculations of the pairwise comparison probabilities for all possible realizations of Gn,m (for any pair n, m). The number of possible realizations of Gn,m, however, grows exponentially with the number of criteria. As long as a reasonably small number of criteria is used, which are both relevant and applicable to multiple types of devices (plug-loads, ACs, PEVs, etc.), the algorithm remains computationally tractable. In particular, as we will see in Sec. 3, the MCDM algorithm has polynomial (quadratic) complexity, O(N^2), in the number of devices, while brute-force enumeration has exponential complexity, O(10^N).

3 Numerical Results

To demonstrate the proposed load prioritization approach, we consider a single-family residential home consisting of a variety of end-use devices regarded as candidates for participating in a curtailment program: a PEV, a residential AC, lights, and other plug-loads, short-listed based on their impact on US annual energy consumption as reported in the study [22]. Table 1 presents a list of such devices, along with their location within the house and their typical power ratings. In this work, the devices are assumed to draw power only at their rated values. Nevertheless, an extension to devices with variable power consumption is possible with a corresponding extension to the activity scenarios. Synthetic time-series data of the devices’ power measurements, performance scores, and other contextual information are generated using the following simulation models.

Table 1

List of residential appliances considered for ranking

Device | Service location | Rated power (W)
AC | Whole building | 3500 (from Ref. [23])
Lights | Living room | 47 (from Ref. [24])
Ceiling fan | Bedroom | 71 (from Ref. [22])
Refrigerator | Kitchen | 225 (from Ref. [23])
PEV charger | Outdoor | 5000 (from Ref. [25])

3.1 Synthetic Data Generation.

We used the following models to generate the synthetic data for illustration of the proposed ranking methodology.

Residential Air-Conditioner: The time-series of the on/off state (un{0,1}) of the residential AC was generated as follows. A discrete three-parameter Weibull distribution proposed in Ref. [26] was used to model the probability of the AC being running (controlled via manual wall switch) at any given time, given the (estimated) occupancy at that moment. A one-mass equivalent thermal parameter model was used to represent the thermal dynamics of the indoor temperature [12]. Note that this simpler model was used only to generate sufficient data to drive the modeling effort, but the rest of the proposed prioritization framework is independent of this modeling assumption. The actual operational state, un, of the AC is determined by two factors: (1) whether or not the AC is running, determined by the (expected) occupancy in a probabilistic sense and (2) when running, the on/off status of the thermostat determined by the local controller which maintains the indoor temperature within acceptable range of the temperature set-point, given varying outside air temperature. The TMY3 dataset [27] was used to generate the outdoor air temperature profiles considering the summer months in Phoenix, AZ. The corresponding time-series of the performance scores is computed using the methodology described in Sec. 2.1.
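A minimal version of this data-generating thermal loop is sketched below, following the stated one-mass model with a deadband thermostat; the thermal resistance, capacitance, cooling capacity, and deadband values are illustrative assumptions:

```python
def simulate_ac(T0, T_set, T_out, dt=1 / 12, R=2.0, C=2.0, P_cool=14.0, db=0.5):
    """One-mass equivalent-thermal-parameter model with a deadband
    thermostat: dT/dt = (T_out - T)/(R*C) - u*P_cool/C, u = 1 when cooling.
    R, C, P_cool, and the 0.5 deg deadband are assumed values."""
    T, u = T0, 0
    temps, states = [], []
    for To in T_out:
        # hysteretic thermostat: switch on above, off below the deadband
        if T > T_set + db:
            u = 1
        elif T < T_set - db:
            u = 0
        T += dt * ((To - T) / (R * C) - u * P_cool / C)
        temps.append(T)
        states.append(u)
    return temps, states

# One day at 5 min resolution with a constant 35 C outdoor temperature
temps, states = simulate_ac(26.0, 24.0, [35.0] * 288)
```

The resulting on/off cycling keeps the indoor temperature within the deadband around the set-point, from which the comfort and reliability score time-series can be computed.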

Plug-Loads: Each plug-load device n under consideration is intuitively assigned a probability of usage P(un = 1|h) based on hour-of-day h, for a work-day scenario. The occupancy patterns from a single-family home reported in Ref. [28] form the basis for the profiles shown in Fig. 3. For example, home refrigerators are usually never turned off and have a high probability of usage throughout the day, while living room lights are typically “on” in the evening.

Fig. 3
Hourly usage probabilities for selected plug-loads

To generate a wide variety of activity scenarios, multiple test runs are carried out using a random generator to assign the rated power to the time-series consumption data, Pn[h]=Prn, with a probability P(un = 1|h). Time-series of the performance scores are computed from the usage probabilities and the power consumption data, as described in Sec. 2.1.
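The test-run generation described above amounts to Bernoulli sampling of each device's hourly usage probability; the probability profile below is a made-up stand-in for the curves in Fig. 3:

```python
import random

# Hypothetical hourly usage probabilities (24 values) for living-room
# lights: low overnight, a morning bump, high in the evening. The actual
# profiles of Fig. 3 are based on the occupancy data of Ref. [28].
p_lights = [0.05] * 6 + [0.3] * 2 + [0.1] * 9 + [0.8] * 6 + [0.2]

def sample_usage(hourly_probs, rated_power, seed=None):
    """One day of on/off states: u[h] = 1 with probability P(u=1|h), and
    the consumed power equals the rated power whenever the device is on."""
    rng = random.Random(seed)
    u = [1 if rng.random() < p else 0 for p in hourly_probs]
    return u, [rated_power * un for un in u]

u, power = sample_usage(p_lights, 47, seed=1)
```

Repeating such draws over many test runs yields the variety of activity scenarios used to fit the Markov models.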

Plug-in Electric Vehicle: PEVs are assumed to be in use during the day-time and plugged-in for charging overnight. The vehicle is assumed to be plugged into a wall-charger at the residential building at around 6 p.m., with the battery SoC between 15% and 25%. It is assumed that the vehicle is expected to be charged to at least 85% of its battery capacity by 8 a.m. in the morning. The charging process is assumed to follow a simple linear model, as described in Ref. [29]. While charging powers and battery sizes vary considerably [25], we assume those values to be 5 kW and 40 kWh, respectively, for data generation in this paper. The PEV is modeled to draw power close to its rated power (with small variations of less than ±5% driven by fluctuations in grid voltage) whenever plugged-in, unless the battery is fully charged. Time-series of the performance scores are calculated using the methodology in Sec. 2.1.
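The linear charging model can be sketched as follows, using the 5 kW / 40 kWh values assumed above; the 14 h plug-in window (6 p.m. to 8 a.m.) and the 85% target follow the stated scenario:

```python
def charge_pev(soc0, capacity_kwh=40.0, p_rated_kw=5.0, dt_h=1 / 12,
               target=0.85, hours=14):
    """Linear charging: SoC grows by P*dt/capacity per step until the
    target is met, after which the charger idles. The 40 kWh battery,
    5 kW rate, and 85% target follow the assumed scenario."""
    soc, trace = soc0, []
    for _ in range(int(hours / dt_h)):
        p = p_rated_kw if soc < target else 0.0
        soc = min(1.0, soc + p * dt_h / capacity_kwh)
        trace.append((soc, p))
    return trace

# Plugged in at 6 p.m. with 20% SoC; 14 h window until 8 a.m.
trace = charge_pev(0.20)
```

Here the charging task completes well within the window (26 kWh at 5 kW is about 5.2 h), which is exactly the slack the PEV comfort score of Sec. 2.1 is meant to capture.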

3.2 Multi-Criteria Prioritization.

Now we present results from an application of the prioritization algorithm described in Sec. 2.3. In order to evaluate the performance of the prioritization schemes, closed-loop simulations are performed whereby a given (selected a priori) number of top-ranked devices are instructed to curtail power until the next ranking update. First we illustrate the working of the algorithm on a typical summer work-day scenario, for a couple of choices of the criteria weights, before presenting various results analyzing the performance of the algorithm. Closed-loop performance of the algorithm is evaluated using the following metrics:
(11)
Note that the first metric is related to the comfort scores, while the second metric is a combination of the reliability and bandwidth scores. The second metric is evaluated considering only the devices picked for curtailment at every ranking update; the first one is calculated considering all the devices.
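For concreteness, one pair of metric definitions consistent with this description (these exact forms are assumptions, with K[h] denoting the set of devices picked for curtailment at update h and T the number of evaluation steps) is:

```latex
% M1: average comfort over all N devices and T ranking periods
% M2: average curtailed power per issued control command
M_1 = \frac{1}{NT} \sum_{h=1}^{T} \sum_{n=1}^{N} X_1^n[h], \qquad
M_2 = \frac{\sum_{h=1}^{T} \sum_{n \in K[h]} P_r^n\, u^n[h]}{\sum_{h=1}^{T} |K[h]|}
```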

Illustration of the Ranking Algorithm: To illustrate the working of the ranking algorithm, we present time-series data from closed-loop simulations from 6 p.m. to 8 a.m. (next day) on a typical work-day. Ranks are updated every 15 min, while measurements (and scores) are available at 5 min intervals. Two different choices of criteria weights are considered:

  • Case 1: W = (0.5, 0.3, 0.2) (comfort, reliability, bandwidth);

  • Case 2: W = (0.3, 0.5, 0.2);

and the two top-most ranked devices are picked for curtailment at every ranking update. Simulation results from case 1 are shown in Figs. 4–10, with case 2 results in Figs. 11 and 12.

Fig. 4
Time-series plot to illustrate the ranking profiles for a typical summer work-day scenario, during the occupancy hours (6 p.m.–8 a.m.) for a particular choice of criteria weights: W = (0.5, 0.3, 0.2), referred to as the case 1
Fig. 5
Time-series plot to illustrate the power usage profiles and comfort scores for the refrigerator during the occupancy hours (6 p.m.–8 a.m.) in the summer work-day for case 1 with W = (0.5, 0.3, 0.2)
Fig. 6
Time-series plot to illustrate the power usage profiles and comfort scores for the AC during occupancy hours (6 p.m.–8 a.m.) in the summer work-day for case 1 with W = (0.5, 0.3, 0.2)
Fig. 6
Time-series plot to illustrate the power usage profiles and comfort scores for the AC during occupancy hours (6 p.m.–8 a.m.) in the summer work-day for case 1 with W = (0.5, 0.3, 0.2)
Close modal
Fig. 7
Time-series plot to illustrate the power usage profiles and comfort scores for the PEV during occupancy hours (6 p.m.–8 a.m.) in the summer work-day for case 1 with W = (0.5, 0.3, 0.2)
Fig. 7
Time-series plot to illustrate the power usage profiles and comfort scores for the PEV during occupancy hours (6 p.m.–8 a.m.) in the summer work-day for case 1 with W = (0.5, 0.3, 0.2)
Close modal
Fig. 8
Time-series plot to illustrate the power usage profiles and comfort scores for the lights during occupancy hours (6 p.m.–8 a.m.) in the summer work-day for case 1 with W = (0.5, 0.3, 0.2)
Fig. 8
Time-series plot to illustrate the power usage profiles and comfort scores for the lights during occupancy hours (6 p.m.–8 a.m.) in the summer work-day for case 1 with W = (0.5, 0.3, 0.2)
Close modal
Fig. 9
Time-series plot to illustrate the power usage profiles and comfort scores for the ceiling fan during occupancy hours (6 p.m.–8 a.m.) in the summer work-day for case 1 with W = (0.5, 0.3, 0.2)
Fig. 9
Time-series plot to illustrate the power usage profiles and comfort scores for the ceiling fan during occupancy hours (6 p.m.–8 a.m.) in the summer work-day for case 1 with W = (0.5, 0.3, 0.2)
Close modal
Fig. 10
Time-series plot to illustrate the fitness values associated with the ranks during occupancy hours (6 p.m.–8 a.m.) in the summer work-day for case 1 with W = (0.5, 0.3, 0.2)
Fig. 10
Time-series plot to illustrate the fitness values associated with the ranks during occupancy hours (6 p.m.–8 a.m.) in the summer work-day for case 1 with W = (0.5, 0.3, 0.2)
Close modal
Fig. 11
Time-series plot to illustrate the ranking profiles for a typical summer work-day scenario during the occupancy hours (6 p.m.–8 a.m.) for a particular choice of criteria weights: W = (0.3, 0.5, 0.2), referred to as the case 2
Fig. 11
Time-series plot to illustrate the ranking profiles for a typical summer work-day scenario during the occupancy hours (6 p.m.–8 a.m.) for a particular choice of criteria weights: W = (0.3, 0.5, 0.2), referred to as the case 2
Close modal
Fig. 12
Time-series plot to illustrate the fitness values associated with the ranks during occupancy hours (6 p.m.–8 a.m.) in the summer work-day for case 2 with W = (0.3, 0.5, 0.2)
Fig. 12
Time-series plot to illustrate the fitness values associated with the ranks during occupancy hours (6 p.m.–8 a.m.) in the summer work-day for case 2 with W = (0.3, 0.5, 0.2)
Close modal

We first discuss case 1 (Figs. 4–10). The refrigerator is always ranked low due to the expected adverse impact of its curtailment on the "comfort scores," as reflected in its very high usage probability throughout the day (Fig. 5), needed to keep the food inside fresh and safe. The AC is ranked low early in the evening (around 6–9 p.m.) because it needs to stay ON to bring the warm indoor temperature close to the set-point, after which it is intermittently ranked high (occupying ranks 1–3) until 8 a.m. Note that the AC is expected to be OFF during the day when the house is typically unoccupied, causing the indoor temperature to rise before 6 p.m. Figure 6 shows the evolution of the outdoor and indoor temperatures, as well as the comfort scores and the utilization of the AC. The PEV, on the other hand, is ranked high in the evening (until after midnight), up to the point when the time left to reach the target SoC becomes short enough that the PEV has to start charging (Fig. 7) and is no longer available for curtailment, which is reflected in lower ranks. The lights and the ceiling fan offer a couple of interesting insights into the algorithm. Lights are ranked low early in the evening because of their high usage probability at that time (Fig. 8), moving up the ranking order afterwards. Note that, during the overnight hours, the lights are ranked between 1 and 2 even though they are usually turned OFF, leading to low reliability scores, reflecting the lack of a sufficient number of devices available overnight to provide curtailment. On a similar note, the ceiling fan is ranked 2 early in the evening (higher than the AC, refrigerator, and lights), even though fans are rarely used during those hours, leading to poor reliability scores (but no adverse impact on comfort). The fan continues to occupy top ranks, especially during overnight hours when it has high usage probability.
The lack of clear choices among devices that can provide curtailment service during the overnight hours is also reflected in the corresponding fitness values during those hours (Fig. 10). A high fitness value for a rank shows that the device occupying that rank is indeed a suitable candidate, while a large separation between the fitness value of a ranked device and that of the next-ranked device signifies the clarity of that particular ranking assignment. For example, Fig. 10 shows that the choice for rank 1 (the PEV) is quite clear and distinct in the early evening hours, while the choices for ranks 2 and 3 are less so. During overnight hours, on the other hand, there do not seem to be clear choices for ranks 1–3, which is also reflected in the fluctuating ranking positions (Fig. 4).

In contrast, in case 2 (Figs. 11 and 12), when reliability is weighted more heavily, the fitness values show a clearer separation between the top-ranked devices (Fig. 12), a fact also reflected in the ranking diagram (Fig. 11), which assigns the refrigerator and the PEV the top ranks due to their otherwise high probability of being ON. Finally, we evaluate the performance of the ranking algorithm under the two cases using the metrics defined in (11) as
which reflect the choice of criteria weights in the two cases.

Performance Analysis: Next, we present results analyzing the performance of the proposed ranking algorithm with respect to the evaluation metrics (11), as well as its computational efficiency. All results are from simulations of 90 work-days (summer months). Figure 13 shows how the ranking performance varies (with respect to the metrics s1 and s2) as we change the criteria weights, for two different choices of the number of top-ranked devices picked for curtailment: 1 and 2. As expected, increasing the reliability weight increases the mean curtailment value, and vice versa, while selecting two devices results in lower comfort but higher curtailed power for the same choice of criteria weights. Next, we fix the criteria weights to W = (0.5, 0.3, 0.2) and evaluate the performance of the stochastic MCDM algorithm as we vary: (a) the number of top-ranked devices picked for curtailment, from 1 to 3, and (b) the ranking update frequency, from once every 5 min to once every 30 min (the ranks are held fixed between two consecutive updates). The results are summarized in Table 2. The comfort values decrease, with an increase in the mean curtailment values, as we commit more devices for curtailment, while the ranking update frequency does not appear to impact the performance. The MCDM algorithm outperforms a randomized ranking method (i.e., one in which devices are assigned ranks arbitrarily) on both metrics, with a "baseline" scenario showing the (best-case) mean comfort value expected when no device participates in curtailment services. Finally, we compare the computational complexity of the proposed algorithm with that of explicit (brute-force) enumeration; the comparison is illustrated in Fig. 14. The MCDM algorithm attains polynomial complexity with respect to the number of candidate devices, O(N²), taking about 30 ms to compute the ranks of 100+ devices (computational time statistics obtained from 100 samples for each N). The brute-force method, on the other hand, scales exponentially, O(10^N), taking about 15 s when N = 7, making it inapplicable to realistically sized problems.
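The scaling difference can be illustrated with a toy sketch (not the paper's exact algorithm): a pairwise-comparison ranking touches each pair of devices once, giving O(N²) work, while enumerating every candidate ranking grows combinatorially with N. The merit function below is an arbitrary stand-in chosen so both methods agree:

```python
from itertools import permutations

def rank_pairwise(fitness):
    """Polynomial-time ranking via O(N^2) pairwise win counts."""
    n = len(fitness)
    wins = [(sum(fitness[k] > fitness[i] for i in range(n)), -k, k)
            for k in range(n)]  # -k breaks ties toward the lower index
    return [k for _, _, k in sorted(wins, reverse=True)]

def rank_enumeration(fitness):
    """Brute force: score every permutation and keep the best one."""
    def merit(perm):
        # reward placing high-fitness devices at the top ranks
        return sum(fitness[d] / (pos + 1) for pos, d in enumerate(perm))
    return list(max(permutations(range(len(fitness))), key=merit))
```

Both return the same ordering on small inputs, but `rank_enumeration` evaluates N! permutations and becomes infeasible within a handful of devices, mirroring the exponential curve in Fig. 14.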

Fig. 13
Impact of criteria weights on ranking performance
Fig. 14
Computational complexity of the stochastic MCDM algorithm (quadratic) compared to explicit enumeration (exponential)
Table 2

Ranking performance under varying scenarios

No. of selections | Rank update (min) | Random (s1, s2)  | MCDM (s1, s2)    | Baseline (s1, s2)
1                 | 5                 | (0.80, 0.69 kW)  | (0.80, 1.93 kW)  | (0.89, 0 kW)
1                 | 15                | (0.80, 0.66 kW)  | (0.81, 1.47 kW)  |
1                 | 30                | (0.80, 0.68 kW)  | (0.81, 1.38 kW)  |
2                 | 5                 | (0.70, 1.67 kW)  | (0.75, 2.40 kW)  |
2                 | 15                | (0.69, 1.59 kW)  | (0.77, 1.93 kW)  |
2                 | 30                | (0.69, 1.74 kW)  | (0.75, 2.13 kW)  |
3                 | 5                 | (0.57, 3.00 kW)  | (0.58, 3.82 kW)  |
3                 | 15                | (0.55, 2.98 kW)  | (0.59, 3.86 kW)  |
3                 | 30                | (0.56, 2.90 kW)  | (0.59, 3.83 kW)  |

4 Conclusions

In this paper, we presented a framework that enables a building owner/operator to effectively value and prioritize loads under temporal uncertainty in building occupancy and other exogenous variables. The proposed load prioritization framework comprises a modeling step, which uses a stochastic (Markov) model to learn the probabilistic behavior of device usage from power consumption data, and a load prioritization step, which dynamically ranks building loads using a stochastic MCDM framework. A simulation case study for a residential building scenario was used to demonstrate the proposed prioritization approach for curtailment service. Time-series plots were used to intuitively explain the rationale behind the ranking outcomes, and the impact of various design choices (such as selection criteria weights, number of devices to commit, and ranking update frequency) on ranking performance was investigated. Future research will explore the applicability of the proposed framework to commercial buildings and connected communities, and generalize it to grid services beyond curtailment.

Footnote

2

Consider any $K$ random variables $\{X_i\}_{i=1}^{K}$. For any pair of the $k$-th and $l$-th random variables, we have, by definition, $P(X_k > \max_{i \neq k} X_i) \leq P(X_k > X_l)$. It therefore follows that $P(X_k > \max_{i \neq k} X_i) \leq \sum_{i \neq k} P(X_k > X_i)/(K-1)$.
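The footnote's bound is easy to sanity-check numerically; the Gaussian distributions and their means below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
K, M = 4, 100_000
# K random variables with different means, M Monte Carlo samples each
X = rng.normal(loc=[0.0, 0.5, 1.0, 1.5], size=(M, K))

k = 3
others = np.delete(X, k, axis=1)      # the K-1 competitors of X_k
beats = X[:, [k]] > others            # pairwise comparison indicators
p_max = beats.all(axis=1).mean()      # estimate of P(X_k > max_{i!=k} X_i)
p_avg = beats.mean()                  # estimate of sum_{i!=k} P(X_k > X_i)/(K-1)
assert p_max <= p_avg                 # the footnote's inequality
```

The inequality in fact holds sample-by-sample here, since the indicator of "beats all competitors" is bounded by the average of the pairwise indicators.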

Acknowledgment

The work was supported by the Building Technologies Office of the U.S. Department of Energy under Contract No. DE-AC05-76RL01830.

Conflict of Interest

There are no conflicts of interest.

References

1. U.S. Energy Information Administration, 2018, "Annual Energy Outlook 2018," Technical Report, U.S. Department of Energy.
2. Office of Energy Efficiency and Renewable Energy, 2019, "Grid-Interactive Efficient Buildings—Overview," Technical Report, U.S. Department of Energy.
3. Deng, R., Yang, Z., Chow, M., and Chen, J., 2015, "A Survey on Demand Response in Smart Grids: Mathematical Models and Approaches," IEEE Trans. Ind. Inform., 11(3), pp. 570–582. 10.1109/TII.2015.2414719
4. Paterakis, N. G., Erdinç, O., and Catalão, J. P., 2017, "An Overview of Demand Response: Key-Elements and International Experience," Renew. Sustain. Energy Rev., 69, pp. 871–891. 10.1016/j.rser.2016.11.167
5. Nutaro, J., Ozmen, O., Sanyal, J., Fugate, D., and Kuruganti, T., 2016, "Simulation Based Design and Testing of a Supervisory Controller for Reducing Peak Demand in Buildings," Proceedings of the 4th International High Performance Buildings Conference at Purdue, West Lafayette, IN, July 11–14.
6. Azar, A. G., Olivero, E., Hiller, J., Lesch, K., Jiao, L., Kolhe, M., Asanalieva, N., Ferrez, P., Zhang, Q., Jacobsen, R., and Siegl, H. S., 2015, "Algorithms for Demand Response and Load Control," SEMIAH, SEMIAH-WP5-D5.1-v0.75, Technical Report.
7. Weng, T., Balaji, B., Dutta, S., Gupta, R., and Agarwal, Y., 2011, "Managing Plug-Loads for Demand Response Within Buildings," Proceedings of the Third ACM Workshop on Embedded Sensing Systems for Energy-Efficiency in Buildings (BuildSys'11), Seattle, WA, November, ACM, pp. 13–18.
8. Espinosa, L. A. D., Almassalkhi, M., Hines, P., and Frolik, J., 2017, "Aggregate Modeling and Coordination of Diverse Energy Resources Under Packetized Energy Management," 2017 IEEE 56th Annual Conference on Decision and Control (CDC), Melbourne, Australia, Dec. 12–15, IEEE, pp. 1394–1400.
9. Nardelli, P. H. J., Alves, H., Pinomaa, A., Wahid, S., Tomé, M. D. C., Kosonen, A., Kühnlenz, F., Pouttu, A., and Carrillo, D., 2019, "Energy Internet Via Packetized Management: Enabling Technologies and Deployment Challenges," IEEE Access, 7, pp. 16909–16924. 10.1109/ACCESS.2019.2896281
10. Vivekananthan, C., and Mishra, Y., 2015, "Stochastic Ranking Method for Thermostatically Controllable Appliances to Provide Regulation Services," IEEE Trans. Power Syst., 30(4), pp. 1987–1996. 10.1109/TPWRS.2014.2353655
11. Kim, W., Katipamula, S., Lutes, R. G., and Underhill, R. M., 2016, "Behind the Meter Grid Services: Intelligent Load Control," Technical Report, Pacific Northwest National Lab.
12. Hao, H., Sanandaji, B. M., Poolla, K., and Vincent, T. L., 2015, "Aggregate Flexibility of Thermostatically Controlled Loads," IEEE Trans. Power Syst., 30(1), pp. 189–198. 10.1109/TPWRS.2014.2328865
13. Nandanoori, S. P., Kundu, S., Vrabie, D., Kalsi, K., and Lian, J., 2018, "Prioritized Threshold Allocation for Distributed Frequency Response," 2018 IEEE Conference on Control Technology and Applications, Copenhagen, Denmark, Aug. 21–24, pp. 237–244.
14. Hu, X., and Nutaro, J., 2020, "A Priority-Based Control Strategy and Performance Bound for Aggregated HVAC-Based Load Shaping," IEEE Trans. Smart Grid, 11(5), pp. 4133–4143. 10.1109/TSG.2020.2977203
15. Jin, X., Baker, K., Christensen, D., and Isley, S., 2017, "Foresee: A User-Centric Home Energy Management System for Energy Efficiency and Demand Response," Appl. Energy, 205, pp. 1583–1595. 10.1016/j.apenergy.2017.08.166
16. Fan, Z.-P., Liu, Y., and Feng, B., 2010, "A Method for Stochastic Multiple Criteria Decision Making Based on Pairwise Comparisons of Alternatives With Random Evaluations," Eur. J. Oper. Res., 207(2), pp. 906–915. 10.1016/j.ejor.2010.05.032
17. Hwang, C.-L., and Yoon, K., 2012, Multiple Attribute Decision Making: Methods and Applications, Vol. 186, Springer Verlag, New York.
18. Bloom, E., and Gohn, B., 2012, "Smart Buildings: Ten Trends to Watch in 2012 and Beyond," Pike Research: CleanTech Market Intelligence.
19. Viterbi, A., 1971, "Convolutional Codes and Their Performance in Communication Systems," IEEE Trans. Commun. Technol., 19(5), pp. 751–772. 10.1109/TCOM.1971.1090700
20. Forney, G. D., 1973, "The Viterbi Algorithm," Proc. IEEE, 61(3), pp. 268–278. 10.1109/PROC.1973.9030
21. Baum, L. E., Petrie, T., Soules, G., and Weiss, N., 1970, "A Maximization Technique Occurring in the Statistical Analysis of Probabilistic Functions of Markov Chains," Ann. Math. Stat., 41(1), pp. 164–171. 10.1214/aoms/1177697196
22. Navigant Consulting Inc., 2013, "Analysis and Representation of Miscellaneous Electric Loads in NEMS," Technical Report, U.S. Energy Information Administration.
23. Efficiency, E., 2005, "Estimating Appliance and Home Electronic Energy Use," US Department of Energy.
24. Gifford, W. R., Goldberg, M. L., Tanimoto, P. M., Celnicker, D. R., and Poplawski, M. E., 2012, "Residential Lighting End-Use Consumption Study: Estimation Framework and Initial Estimates," Technical Report, Pacific Northwest National Lab. (PNNL), Richland, WA.
25. Battery University, 2019, "BU-1003: Electric Vehicle (EV)," https://batteryuniversity.com/learn/article/electric_vehicle_ev, Accessed June 28, 2019.
26. Ren, X., Yan, D., and Wang, C., 2014, "Air-Conditioning Usage Conditional Probability Model for Residential Buildings," Build. Environ., 81, pp. 172–182. 10.1016/j.buildenv.2014.06.022
27. Wilcox, S., and Marion, W., 2008, "Users Manual for TMY3 Data Sets," Technical Report, National Renewable Energy Laboratory, Golden, CO.
28. Hendron, R., and Engebrecht, C., 2010, "Building America House Simulation Protocols," Technical Report, Office of Energy Efficiency and Renewable Energy (EERE), Washington, DC.
29. Han, S., Han, S., and Sezaki, K., 2010, "Development of an Optimal Vehicle-to-Grid Aggregator for Frequency Regulation," IEEE Trans. Smart Grid, 1(1), pp. 65–72. 10.1109/TSG.2010.2045163