Quantitative preference models predict customer choices among design alternatives from prior purchase data or survey responses. This paper examines how to improve the prediction accuracy of such models without collecting more data or changing the preference model. We propose to use features as an intermediary between the original customer-linked design variables and the preference model, transforming the original variables into a feature representation that captures the underlying design preference task more effectively. We apply this idea to automobile purchase decisions using three feature learning methods (principal component analysis (PCA), low rank and sparse matrix decomposition (LSD), and exponential sparse restricted Boltzmann machine (RBM)) and show, using over 1 million data points derived from real passenger vehicle purchases, that the use of features improves prediction accuracy. We then show that the interpretation and visualization of these feature representations may be used to help augment data-driven design decisions.

Introduction

Much research has been devoted to developing design preference models that predict customer design choices. A common approach is to (i) collect a large database of previous purchases that includes customer data (e.g., age, gender, and income) and purchased product design data (e.g., number of cylinders, length, and curb weight for an automobile), and (ii) statistically infer a design preference model that links customer and product variables, using conjoint analysis or discrete choice analysis such as logit, mixed logit, and nested logit models [1,2].

However, a customer may not purchase a vehicle solely due to interactions between these two sets of variables, e.g., a 50-yr old male prefers six-cylinder engines. Instead, a customer may purchase a product for more “meaningful” design attributes that are functions of the original variables, such as environmental sustainability or sportiness [3,4]. These meaningful intermediate functions of the original variables, both of the customer and of the design, are hereafter termed features. We posit that using customer and product features, instead of just the original customer and product variables, may increase the prediction accuracy of the design preference model.

Our goal then is to find features that improve this preference prediction accuracy. To this end, one common approach is to ask design and marketing domain experts to choose these features intuitively, such as a design's social context [5] and visual design interactions [6]. For example, eco-friendly vehicles may be a function of miles per gallon (MPG) and emissions, whereas environmentally active customers may be a function of age, income, and geographic region. An alternative explored in this paper is to find features “automatically” using feature learning methods studied in computer science and statistics. As shown in Fig. 1, feature learning methods create an intermediate step between the original data and the design preference model by forming a more efficient “feature representation” of the original data. Certain well-known methods such as PCA may be viewed similarly, but more recent feature learning methods have shown impressive results in 1D waveform prediction [7] and 2D image object recognition [8].

Fig. 1
The concept of feature learning as an intermediate mapping between variables and a preference model. The diagram on top depicts conventional design preference modeling (e.g., conjoint analysis) where an inferred preference model discriminates between alternative design choices for a given customer. The diagram on bottom depicts the use of features as an intermediate modeling task.

We conduct an experiment on automobile purchasing preferences to assess whether three feature learning methods increase design preference prediction accuracy: (1) principal component analysis, (2) LSD, and (3) exponential family sparse RBMs [9,10]. We cast preference prediction as a binary classification task by asking the question: "Given customer x, do they purchase vehicle p or vehicle q?" Our dataset comprises 1,161,056 data points generated from 5582 real passenger vehicle purchases in the United States during model year 2006 (MY2006).

The first contribution of this work is an increase of preference prediction accuracy by 2–7% just using simple “single-layer” feature learning methods, as compared with the original data representation. These results suggest features indeed better represent the customer's underlying design preferences, thus offering deeper insight to inform decisions during the design process. Moreover, this finding is complementary to recent work in crowdsourced data gathering [11,12] and nonlinear preference modeling [13,14] since they do not affect the preference model or dataset itself.

The second contribution of this work is to show how features may be used in the design process. We show that feature interpretation and feature visualization offer designers additional tools for augmenting design decisions. First, we interpret the most influential pairings of vehicle features and customer features to the preference task, and contrast this with the same analysis using the original variable representation. Second, we visualize the theoretically optimal vehicle for a given customer within the learned feature representation and show how this optimal vehicle, which does not exist, may be used to suggest design improvements upon current models of vehicles that do exist in the market.

Fig. 2
The concept of principal component analysis shown using an example with a data point represented by three original variables x projected to a two-dimensional subspace spanned by w to obtain features h
Fig. 3
The concept of LSD using an example “part-worth coefficients” matrix of size 10 × 10 decomposed into two 10 × 10 matrices with low rank or sparse structure. Lighter colors represent larger values of elements in each decomposed matrix.

Methodological contributions include being the first to use recent feature learning methods on heterogeneous design and marketing data. Recent feature learning research has focused on homogeneous data, in which all variables are real-valued numbers such as pixel values for image recognition [8,15]; in contrast, we explicitly model the heterogeneous distribution of the input variables, for example, “age” being a real-valued variable and “General Motors” being a categorical variable. Subsequently, we give a number of theoretical extensions: First, we use exponential family generalizations for the sparse RBMs, enabling explicit modeling of statistical distributions for heterogeneous data. Second, we derive theoretical bounds on the reconstruction error of the LSD feature learning method.

This paper is structured as follows: Section 2 discusses efforts to increase prediction accuracy by the design community, as well as feature learning advances in the machine learning community. Section 3 sets up the preference prediction task as a binary classification problem. Section 4 details three feature learning methods and their extension to suit heterogeneous design and market data. Section 5 details the experimental setup of the preference prediction task, followed by results showing improvement of preference prediction accuracy. Section 6 details how features may be used to inform design decisions through feature interpretation and feature visualization. Section 7 concludes this work.

Background and Related Work

Design preference modeling has been investigated in design for market systems, where quantitative engineering and marketing models are linked to improve enterprise-wide decision making [16–18]. In such frameworks, the design preference model is used to aggregate input across multiple stakeholders, with special importance on the eventual customer within the targeted market segment [19].

These design preference models have been shown to be especially useful for the design of passenger vehicles, as demonstrated across a variety of applications such as engine design [20], vehicle packaging [21], brand recognition [22], and vehicle styling [3,6,23]. Connecting many of these research efforts is the desire for improved prediction accuracy of the underlying design preference model. With increased prediction accuracy, measured using “held out” portions of the data, greater confidence may be placed in the fidelity of the resulting design conclusions.

Efforts to improve prediction accuracy involve (i) developing more complex statistical models to capture the heterogeneous and stochastic nature of customer preferences; examples include mixed and nested logit models [1,2], consideration sets [24], and kernel-based methods [13,14,25], and (ii) creating adaptive questionnaires to obtain stated information more efficiently using a variety of active learning methods [26,27].

This work is different from (i) above in that the set of features learned is agnostic of the particular preference model used. One can just as easily switch out the l2 logit design preference model used in this paper for another model, whether it be mixed logit or a kernel machine. This work is also different from (ii) in that we are working with a set of revealed data on actual vehicle purchases, rather than eliciting this data through a survey. Accordingly, this work is among recent efforts toward data-driven approaches in design [28], including design analytics [29] and design informatics [30], in that we are directly using data to augment existing modeling techniques and ultimately suggest actionable design decisions.

Feature Learning.

Feature learning methods capture statistical dependencies implicit in the original variables by "encoding" the original variables in a new feature representation. This representation keeps the number of data points the same while changing the length of each data point from M variables to K features. The idea is to minimize an objective function defining the reconstruction error between the original variables and their new feature representation. If this representation is more meaningful for the discriminative design preference prediction task, we can use the same supervised model (e.g., logit model) as before to achieve higher predictive performance. More details are given in Sec. 4.

The first feature learning method we examined is PCA. While not conventionally referred to as a feature learning method, PCA is chosen for its ubiquitous use and its qualitative difference from the other two methods. In particular, PCA makes the strong assumption that the data are Gaussian noise distributed around a linear subspace of the original variables, with the goal of learning the eigenvectors spanning this subspace [31]. The features in our case are the coefficients of the original variables when projected onto this subspace or, equivalently, the inner product with the learned eigenvectors.

The second feature learning method is LSD. This method is chosen as it defines the features implicitly within the preference model. In particular, LSD decomposes the “part-worth” coefficients contained in the design preference model (e.g., conjoint analysis or discrete choice analysis) into a low-rank matrix plus a sparse matrix. This additive decomposition is motivated by results from the marketing literature suggesting that certain purchase consideration is linearly additive [32], and thus well captured by decomposed matrices [33]. An additional motivation for a linear decomposition model is the desire for interpretability [34]. Predictive consumer marketing oftentimes uses these learned coefficients to work hand-in-hand with engineering design to generate competitive products or services [35]. Such advantages are bolstered by separation of factors captured by matrix decomposition, as separation may lead to better capture of heterogeneity among market segments [36]. Readers are referred to Ref. [37] for further in-depth discussion.

The third feature learning method is the exponential family sparse RBM [9,38]. This method is chosen as it explicitly represents the features, in contrast with the LSD. The method is a special case of a Boltzmann machine, an undirected graphical model in which the energy associated with each state of the system defines the probability of finding the system in that state [9]. In the RBM, each state is determined by both visible and hidden nodes, where each node corresponds to a random variable. The visible nodes are the original variables, while the hidden nodes are the feature representation. The "restricted" portion of the RBM refers to the restriction on visible–visible connections and hidden–hidden connections, later detailed and depicted in Sec. 4 and Fig. 4, respectively.

Fig. 4
The concept of the exponential family sparse RBM. The original data are represented by nodes in the visible layer by [x1,x2], while the feature representation of the same data is represented by nodes in the hidden layer [h1,h2,h3,h4]. Undirected edges are restricted to being only between the original layer and the hidden layer, thus enforcing conditional independence between nodes in the same layer.

All three feature learning methods are considered “simple” in that they are single-layer models. The aforementioned results in 1D waveform speech recognition and 2D image object recognition have been achieved using hierarchical models, built by stacking multiple single-layer models. We chose single-layer feature learning methods here as an initial effort and to explore parameter settings more easily; as earlier noted, there is limited work on feature learning methods for heterogeneous data (e.g., categorical variables) and most advances are currently only on homogeneous data (e.g., real-valued 2D image pixels).

Preference Prediction as Binary Classification

We cast the task of predicting a customer's design preferences as a binary classification problem: Given customer j, represented by a vector of heterogeneous customer variables xc(j), as well as two passenger vehicle designs p and q, each represented by a vector of heterogeneous vehicle design variables xd(p) and xd(q), which passenger vehicle will the customer purchase? We use a real dataset of customers and their passenger vehicle purchase decisions as detailed below [39].

Customer and Vehicle Purchase Data From 2006.

The data used in this work combine the Maritz vehicle purchase survey from 2006 [39], the Chrome vehicle variable database [40], and the 2006 estimated U.S. state income and living cost data from the U.S. Census Bureau [41] to create a dataset with both customer and passenger vehicle variables. These combined data result in a matrix of purchase records, with each row corresponding to a separate customer and purchased vehicle pair, and each column corresponding to a variable describing the customer (e.g., age, gender, and income) or the purchased vehicle (e.g., no. of cylinders, length, and curb weight).

From this original dataset, we focus only on the customer group who bought passenger vehicles of size classes between minicompact and large vehicles, thus excluding data for station wagons, trucks, minivans, and utility vehicles. In addition, purchase data for customers who did not consider other vehicles before their purchases were removed, as well as data for customers who purchased vehicles for another party.

The resulting database contained 209 unique passenger vehicle models bought by 5582 unique customers. The full list of customer variables and passenger vehicle variables can be found in Tables 1 and 2. The variables in these tables are grouped into three unit types: real, binary, and categorical, based on the nature of the variables.

Table 1

Customer variables xc and their variable types

Customer variable            Type    Customer variable            Type
Age                          Real    U.S. state cost of living    Real
Number of house members      Real    Gender                       Binary
Number of small children     Real    Income bracket               Categorical
Number of med. children      Real    House region                 Categorical
Number of large children     Real    Education level              Categorical
Number of children           Real    U.S. state                   Categorical
U.S. state average income    Real
Table 2

Design variables xd and their variable types

Design variable       Type    Design variable           Type
Invoice               Real    AWD/4WD                   Binary
MSRP                  Real    Automatic transmission    Binary
Curb weight           Real    Turbocharger              Binary
Horsepower            Real    Supercharger              Binary
MPG (combined)        Real    Hybrid                    Binary
Length                Real    Luxury                    Binary
Width                 Real    Vehicle class             Categorical
Height                Real    Manufacturer              Categorical
Wheelbase             Real    Passenger capacity        Categorical
Final drive           Real    Engine size               Categorical
Diesel                Binary

Choice Set Training, Validation, and Testing Split.

We converted the dataset of 5582 passenger vehicle purchases into a binary choice set by generating all pairwise comparisons between the purchased vehicle and the other 208 vehicles in the dataset for all 5582 customers. This resulted in N = 1,161,056 data points, where each datum indexed by n consisted of a triplet (j, p, q) of a customer indexed by j and two passenger vehicles indexed by p and q, as well as a corresponding indicator variable y(n) ∈ {0,1} describing which of the two vehicles was purchased.
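To make this construction concrete, the following minimal Python sketch (with hypothetical names and simplified details, not the exact preprocessing used in our experiments) enumerates the pairwise comparison triplets for a single customer.

    import numpy as np

    def make_choice_triplets(purchased_idx, n_vehicles, rng):
        """Enumerate (p, q, y) comparisons for one customer.
        y = 1 if vehicle p is the purchased vehicle, 0 if vehicle q is."""
        triplets = []
        for other in range(n_vehicles):
            if other == purchased_idx:
                continue
            # Randomize which side the purchased vehicle appears on so the
            # label is not trivially constant across the choice set.
            if rng.random() < 0.5:
                triplets.append((purchased_idx, other, 1))
            else:
                triplets.append((other, purchased_idx, 0))
        return triplets

    rng = np.random.default_rng(0)
    triplets = make_choice_triplets(purchased_idx=42, n_vehicles=209, rng=rng)
    print(len(triplets))  # 208 comparisons per customer; 5582 x 208 = 1,161,056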

These full data were then randomly shuffled and split into training, validation, and testing sets. As previous studies have shown the impact on prediction performance of different choice set generation schemes [42], we created ten random shufflings and subsequent data splits of our dataset, and ran the design preference prediction experimental procedure of Sec. 5 on each one independently. This work is therefore complementary to studies on developing appropriate choice set generation schemes such as Ref. [43]. Full details of the data processing procedure are given in Sec. 5.

Bilinear Design Preference Utility.

We adopt the conventions of utility theory for the measure of customer preference over a given product [44]. Formally, each data point consists of a pairwise comparison between vehicles p and q for customer j, with corresponding customer variables xc(j) for j ∈ {1, …, 5582} and original variables of the two vehicle designs, xd(p) and xd(q), for p, q ∈ {1, …, 209}. We assume a bilinear utility model for customer j and vehicle p
Ujp = ωT [vec(xc(j) ⊗ xd(p)), xc(j), xd(p)]   (1)

where ⊗ is an outer product for vectors, vec(·) is vectorization of a matrix, [·,·] is concatenation of vectors, and ω is the part-worth vector.
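For illustration, the short numpy sketch below (variable names and dimensions are hypothetical, not taken from our implementation) builds the vector [vec(xc ⊗ xd), xc, xd] and evaluates the utility as a single inner product with the part-worth vector ω.

    import numpy as np

    def bilinear_feature_vector(x_c, x_d):
        """[vec(x_c outer x_d), x_c, x_d] as in Eq. (1)."""
        interaction = np.outer(x_c, x_d).ravel()  # vectorized outer product
        return np.concatenate([interaction, x_c, x_d])

    def utility(omega, x_c, x_d):
        """Bilinear utility: inner product of the part-worth vector with the
        interaction and main-effect terms."""
        return omega @ bilinear_feature_vector(x_c, x_d)

    x_c = np.random.randn(20)                    # encoded customer variables
    x_d = np.random.randn(31)                    # encoded vehicle design variables
    omega = np.random.randn(20 * 31 + 20 + 31)   # part-worth vector
    print(utility(omega, x_c, x_d))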

Design Preference Model.

The preference model refers to the assumed relationship between the bilinear utility model described in Sec. 3.3 and a label indicating which of the two vehicles the customer actually purchased. While the choice of preference model is not the focus of this paper, we pilot tested popular models including l1- and l2-regularized logit models, naïve Bayes, l1- and l2-regularized linear as well as kernelized support vector machines, and random forests.

Based on these pilot results, we chose the l2 logit model due to its widespread use in the design and marketing communities [37,45]; in particular, we used the primal form of the logit model. Equation (2) captures how the logit model describes the probabilistic relationship between customer j's preference for either vehicle p or vehicle q as a function of their associated utilities given by Eq. (1). Note that ϵ are Gumbel-distributed random variables accounting for noise over the underlying utility of the customer j's preference for either vehicle p or vehicle q
P(y(jpq) = 1) = P(Ujp + εjp > Ujq + εjq) = exp(Ujp) / (exp(Ujp) + exp(Ujq))   (2)

Parameter Estimation.

We estimate the parameters of the logit model in Eq. (2) by conventional convex loss minimization, using the log-loss regularized with the l2 norm
(3)

where y(n)=y(jpq) is 1 if customer j chose vehicle p to purchase and 0 if vehicle q was purchased, and α is the l2 regularization hyperparameter. The optimization algorithm used to minimize this regularized loss function was stochastic gradient descent, with details of hyperparameter settings given in Sec. 5.
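As a minimal sketch of this estimation step (the learning-rate schedule and data handling are simplified, and the names are illustrative rather than those of our implementation), the following Python code performs one stochastic gradient update of the l2-regularized log-loss, where each datum z is the difference of the bilinear feature vectors of vehicles p and q for a given customer.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def sgd_step(omega, z, y, alpha, lr):
        """One SGD step on the l2-regularized log-loss of Eq. (3).
        z     : feature-vector difference between vehicles p and q
        y     : 1 if vehicle p was purchased, 0 if vehicle q was
        alpha : l2 regularization weight, lr : learning rate"""
        p = sigmoid(omega @ z)                 # predicted P(choose p)
        grad = (p - y) * z + alpha * omega     # log-loss gradient plus l2 term
        return omega - lr * grad

    rng = np.random.default_rng(0)
    dim = 100
    omega = np.zeros(dim)
    for _ in range(1000):                      # toy data standing in for choice sets
        z = rng.normal(size=dim)
        y = float(rng.random() < sigmoid(0.1 * z.sum()))
        omega = sgd_step(omega, z, y, alpha=1e-4, lr=0.01)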

Feature Learning

We present three qualitatively different feature learning methods as introduced in Sec. 2: (1) PCA, (2) LSD, and (3) exponential family sparse RBM. Furthermore, we discuss their extensions to better suit the market data described in Sec. 3, as well as derivation of theoretical guarantees.

Principal Component Analysis.

PCA maps the original data representation x = [x1, x2, …, xM]T ∈ ℝM×1 to a new feature representation h = [h1, h2, …, hK]T ∈ ℝK×1, K ≤ M, with an orthogonal transformation W ∈ ℝM×K (Fig. 2). Assume that the original data representation x has zero empirical mean (otherwise, we simply subtract the empirical mean from x). The mapping is given by
h = WTx   (4)
The PCA representation has the following properties: (1) h1 has the largest variance, and the variance of hi is not smaller than the variance of hj for all j > i; (2) the columns of W are orthogonal unit vectors; and (3) h and W minimize the reconstruction error ε
(5)

When the K columns of W consist of the first K eigenvectors of the empirical covariance matrix of the data, the above properties are all satisfied, and the PCA feature representation can be calculated by Eq. (4). Since PCA is a projection onto a subspace, the features h in this case are not "higher order" functions of the original variables, but rather a linear mapping from the original variables to a strictly smaller number of linear coefficients over the eigenvectors.
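A minimal numpy sketch of this projection is given below (it assumes the rows of X are the original data points and have been standardized; the names are illustrative).

    import numpy as np

    def pca_features(X, K):
        """Project zero-mean data X (N x M) onto its top-K principal directions."""
        X = X - X.mean(axis=0)                      # enforce zero empirical mean
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        W = Vt[:K].T                                # M x K top eigenvectors of the covariance
        H = X @ W                                   # N x K feature representation, Eq. (4)
        return H, W

    X = np.random.randn(500, 30)
    H, W = pca_features(X, K=10)
    print(H.shape, W.shape)                         # (500, 10) (30, 10)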

Low-Rank + Sparse Matrix Decomposition.

The utility model Urp given in Eq. (1) can be rewritten into matrix form, in which Ω is a matrix reshaped from the part-worth coefficients vector ω
(6)
The decomposition of the original part-worth coefficients into a low-rank matrix and a sparse matrix may better represent customer purchase decisions than the large coefficient matrix of all pairwise interactions given in Eq. (1) and as detailed in Sec. 2. Accordingly, we decompose Ω into a low-rank matrix L of rank r superimposed with a sparse matrix S, i.e., Ω=L+S (Fig. 3). This problem may be solved in the general case exactly with the following optimization problem:
(7)
where Xc and Xd are the full sets of customer and vehicle data, y is the vector indicating whether customer j chose vehicle p or vehicle q, and l(·) is the log-loss without the l2 norm
(8)
and C is a convex set corresponding to the sparse matrix S. As this problem is intractable (NP-hard), we instead learn this decomposition of matrices using an approximation obtained via regularized loss function minimization
(9)

where ||·||* is the nuclear norm to promote low-rank structure, and ||·||1 is the l1 norm.

A number of low-rank regularizers may be used in Eq. (9), e.g., the trace norm and the log-determinant norm [46]. We choose the nuclear norm as it may be applied to any general matrix, while trace norm and log-determinant regularization are limited to positive semidefinite matrices. Moreover, the nuclear norm is often considered optimal because ||L||* is the convex envelope of Rank(L), implying that ||L||* is the largest convex function smaller than Rank(L) [46].

Definition 1. For matrix L, the nuclear norm is defined as

||L||* = Σi si(L)

where si(L) is a singular value of L.

Parameter Estimation.

The nondifferentiability of the convex low-rank + sparse approximation given in Eq. (9) necessitates optimization techniques such as augmented Lagrangian methods [47], semidefinite programming [48], and proximal methods [49]. Due to theoretical guarantees on convergence, we train our model using proximal methods, defined as follows.

Definition 2. Let f : ℝn → ℝ ∪ {+∞} be a closed proper convex function. The proximal operator of f is defined as

proxf(v) = argminx ( f(x) + (1/2)||x − v||22 )
With these preliminaries, we now detail the proximal gradient algorithm used to solve Eq. (9) using low-rank and l1 proximal operators. Denote f(·) = ||·||*, and its proximal operator as proxf. Similarly, denote the proximal operator for the l1 regularization term by proxS. Details of calculating proxf and proxS may be found in the Appendix.

With this notation, the proximal optimization algorithm to solve Eq. (9) is given by Algorithm 1. Moreover, this algorithm is guaranteed to converge with constant step size as given by the following lemma [49].

Lemma 1. Convergence Property. When the gradient of l is Lipschitz continuous with constant ρ, this method can be shown to converge with rate O(1/k) when a fixed step size ηt = η ∈ (0, 1/ρ] is used. If ρ is not known, the step sizes ηt can be found by a line search; that is, their values are chosen in each step.

Algorithm 1

Low-Rank + Sparse Matrix Decomposition

Input: Data Xc, Xd, y
Initialize L0 = 0, S0 = 0
repeat
  Lt+1 = proxf(Lt − ηt ∇Lt l(L, S; Xc, Xd, y))
  St+1 = proxS(St − ηt ∇St l(L, S; Xc, Xd, y))
until Lt and St have converged
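A minimal numpy sketch of the two proximal operators and the iteration in Algorithm 1 is given below (the gradient of the loss is left as a user-supplied function, all names are illustrative rather than our implementation, and a simple quadratic loss is used in the toy example).

    import numpy as np

    def prox_nuclear(M, tau):
        """Singular value soft-thresholding: prox of tau*||.||_* (Proposition 1)."""
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

    def prox_l1(M, tau):
        """Elementwise soft-thresholding: prox of tau*||.||_1."""
        return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

    def lsd_proximal_gradient(grad_loss, shape, lam1, lam2, eta=0.1, n_iter=200):
        """Proximal gradient for Omega = L + S with nuclear and l1 penalties."""
        L = np.zeros(shape)
        S = np.zeros(shape)
        for _ in range(n_iter):
            G = grad_loss(L + S)                      # gradient of the loss w.r.t. Omega
            L = prox_nuclear(L - eta * G, eta * lam1)
            S = prox_l1(S - eta * G, eta * lam2)
        return L, S

    # Toy usage: recover a low-rank target under the quadratic loss 0.5*||Omega - T||^2
    rng = np.random.default_rng(0)
    T = rng.normal(size=(20, 5)) @ rng.normal(size=(5, 20))   # rank-5 target matrix
    L, S = lsd_proximal_gradient(lambda Om: Om - T, T.shape, lam1=0.5, lam2=0.5)
    print(np.linalg.matrix_rank(L), np.count_nonzero(S))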

Error Bound on Low-Rank + Sparse Estimation.

We additionally prove a variational bound that guarantees this parameter estimation method converges to a unique solution with bounded error as given by the following theorem.

Theorem 1. Error bound on low-rank + sparse estimation

whereL*is the optima of problem (9) andL0is the matrix minimizing the loss functionl(·).

The proof of this theorem is given in the Appendix.

Restricted Boltzmann Machine.

The RBM is an energy-based model in which an energy state is defined by a layer of M visible nodes corresponding to the original variables x and a layer of K features denoted as h. The energy for a given pair of original variables and features determines the probability associated with finding the system in that state; like nature, systems tend toward states that minimize their energy and thus maximize their probability. Accordingly, maximizing the likelihood of the observed data x(1), …, x(N) and their corresponding feature representations h(1), …, h(N) is a matter of finding the set of parameters that minimizes the energy for all observed data.

While traditionally this likelihood is defined over binary variables and binary features, as described in Tables 1 and 2, our passenger vehicle purchase dataset consists of MG Gaussian variables, MB binary variables, and MC categorical variables. We accordingly define three corresponding energy functions EG, EB, and EC, in which each energy function connects the original variables and features via a weight matrix W, as well as biases for each original variable and feature, a and b, respectively.

Real-valued random variables (e.g., vehicle curb weight) are modeled using the Gaussian density. The energy function for Gaussian inputs and binary hidden nodes is
(10)

where the variance term is clamped to unity under the assumption that the input data are standardized.

Binary random variables (e.g., gender) are modeled using the Bernoulli density. The energy function for Bernoulli nodes in both the input layer and hidden layer is
(11)
Categorical random variables (e.g., vehicle manufacturer) are modeled using the categorical density. The energy function for categorical inputs with Zm classes for the mth categorical input variable (e.g., Toyota, General Motors, etc.) is given by
(12)

where δmz=1 if xmz = 1 and 0 otherwise.

Given these energy functions for the heterogeneous original variables, the probability of a state with energy E(x,h;θ)=EG(x,h;θ)+EB(x,h;θ)+EC(x,h;θ), in which θ={W,a,b} are the energy function weights and bias parameters, is defined by the Boltzmann distribution
(13)
The “restriction” on the RBM is to disallow visible–visible and hidden–hidden node connections. This restriction results in conditional independence of each individual hidden unit h given the vector of inputs x, and each visible unit x given the vector of hidden units h
(14)
(15)
The conditional density for a single binary hidden unit given the combined MG Gaussian, MB binary, and MC categorical input variables is then
(16)

where σ(s)=1/(1+exp(s)) is a sigmoid function.

For an input data point x(n), its corresponding feature representation h(n) is given by sampling the “activations” of the hidden nodes
(17)
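A minimal numpy sketch of this encoding step, Eqs. (16) and (17), is shown below (it assumes the heterogeneous inputs have already been standardized or one-hot encoded into a single numeric vector; all names are illustrative).

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def rbm_encode(x, W, b, rng=None, sample=False):
        """Hidden activations P(h_k = 1 | x) = sigmoid(b_k + W[:, k]^T x), Eq. (16).
        x : M-dim input vector, W : M x K weights, b : K-dim hidden biases."""
        p = sigmoid(b + x @ W)                     # all K hidden probabilities at once
        if sample:                                 # Eq. (17): Bernoulli sample of the activations
            rng = rng or np.random.default_rng()
            return (rng.random(p.shape) < p).astype(float)
        return p

    M, K = 50, 100                                 # overcomplete feature layer (gamma = 2)
    W = 0.01 * np.random.randn(M, K)
    b = np.zeros(K)
    x = np.random.randn(M)
    h = rbm_encode(x, W, b)                        # feature representation of x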

Parameter Estimation.

To train the model, we optimize the weight and bias parameters θ={W,b,a} by minimizing the negative log-likelihood of the data {x(1)x(N)} using gradient descent. The gradient of the log-likelihood is
(18)

The gradient is the difference of two expectations: the first of which is easy to compute since it is “clamped” at the input datum x, but the second of which requires the joint density over the entire x space for the model.

In practice, this second expectation is approximated using the contrastive divergence algorithm: Gibbs sampling alternates between sampling the hidden nodes given the visible nodes and the visible nodes given the hidden nodes, iterating a sufficient number of steps for the approximation [50]. During training, we induce sparsity of the hidden layer by setting a target activation βk, fixed to 0.1, for each hidden unit hk [38]. The overall objective to be minimized is then the negative log-likelihood from Eq. (18) plus a penalty on the deviation of the hidden layer from the target activation. Since the hidden layer is made up of sigmoid densities, the overall objective function is
(19)

where λ3 is the hyperparameter trading off the sparsity penalty with the log-likelihood.
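The sketch below shows one contrastive divergence (CD-1) update for the Gaussian-visible case only, with the sparsity target applied as a simple nudge on the hidden biases; it omits the binary and categorical energy terms and is a simplified stand-in for the full training procedure rather than a reproduction of it.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def cd1_update(v0, W, a, b, lr=0.01, beta=0.1, lam3=1.0, rng=None):
        """One CD-1 step for a Gaussian-Bernoulli RBM with sparsity target beta."""
        rng = rng or np.random.default_rng()
        h0 = sigmoid(b + v0 @ W)                          # positive phase, clamped at the datum
        h0_s = (rng.random(h0.shape) < h0).astype(float)
        v1 = a + h0_s @ W.T                               # one Gibbs step back to the visibles
        h1 = sigmoid(b + v1 @ W)                          # negative phase
        W += lr * (np.outer(v0, h0) - np.outer(v1, h1))   # approximate gradient of Eq. (18)
        a += lr * (v0 - v1)
        b += lr * ((h0 - h1) + lam3 * (beta - h0))        # push mean activation toward beta
        return W, a, b

    M, K = 30, 60
    W, a, b = 0.01 * np.random.randn(M, K), np.zeros(M), np.zeros(K)
    for x in np.random.randn(1000, M):                    # standardized toy training data
        W, a, b = cd1_update(x, W, a, b)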

Experiment

The goal in this experiment was to assess how preference prediction accuracy changes when using the same preference model on four different representations of the same dataset. The preference model used, as discussed in Sec. 3.4, was the l2 logit, while the four representations were the original variables, PCA features, low-rank + sparse features, and RBM features. The same experimental procedure was run on each of these representations, where the first acts as a baseline for prediction accuracy, and the remaining three demonstrate the relative change in preference prediction accuracy when using features.

In addition, we analyzed how the hyperparameters of the PCA, LSD, and RBM feature learning methods affected design preference prediction accuracy. For PCA, the hyperparameter was the dimensionality K of the subspace spanned by the eigenvectors of the PCA method. For LSD, the hyperparameters were the rank penalty λ1, which affects the rank of the low-rank matrix L, and the sparsity penalty λ2, which influences the number of nonzero elements in the sparse matrix S, both found in Eq. (9). For RBM, the hyperparameters were the sparsity penalty λ3, which controls the number of features activated for a given input datum, and the overcompleteness factor γ, which defines by what factor the dimensionality of the feature space is larger than the dimensionality of the original variable space, both of which are found in Eq. (19).

The detailed experiment flow is summarized below and illustrated in Fig. 5:

Fig. 5
Data processing, training, validation, and testing flow
  1. (1)

    The raw choice dataset of pairs of customers and purchased designs, described in Sec. 3.1, was randomly split ten times into 70% training, 10% validation, and 20% test sets. This was done in the beginning to ensure no customers in the training sets ever existed in the validation or test sets.

  2. (2)

    Choice sets were generated for each training, validation, and test sets for all ten randomly shuffled splits as described in Sec. 3.2. This process created a training dataset of 832,000 data points, a validation dataset of 104,000 data points, and a testing dataset of 225,056 data points, for each of the ten shuffled splits.

  3. (3)

    Feature learning was conducted on the training sets of customer variables and vehicle variables for a vector of five different values of K for PCA features, a grid of 25 different pairs of low-rank penalty λ1 and sparsity penalty λ2 for the LSD features, and a grid of 56 different pairs of sparsity λ3 and overcompleteness γ hyperparameters for RBM features. For PCA features, these hyperparameters were K{30,50,70,100,150}. For LSD features, these hyperparameters were λ1{0.005,0.01,0.05,0.1,0.5} and λ2{0.005,0.01,0.05,0.1,0.5}. For RBM, these hyperparameters were λ3{4.0,5.0,6.0,7.0,8.0,9.0,10.0} and γ{0.25,0.5,0.75,1.0,1.5,2.0,2.5,3.0}. These hyperparameter settings were selected by pilot testing large ranges of parameter settings to find relevant regions for upper and lower hyperparameter bounds, with numbers of hyperparameters selected based on computational constraints.

  4. (4)

    Each of the validation and testing datasets were encoded using the feature learning methods learned for each of the 5 PCA hyperparameters K, 25 (λ1,λ2) LSD hyperparameter pairs, and 56 (λ3,γ) RBM hyperparameter pairs.

  5. (5)
    The encoded feature data were combined with the original variable data in order to separate the linear effects of the original variables from the higher order effects captured by the features. While this introduces a degree of information redundancy between features and original variables, the regularization term in Eq. (3) mitigates effects of collinearity. Each datum consists of the features concatenated with the original variables, then input into the bilinear utility model. Specifically, for customer features hu and customer variables xu, we used the concatenation [xuT, huT] as the new representation of the customer; likewise, for vehicle features hc and vehicle variables xc, we used the concatenation [xcT, hcT] as the new representation of the vehicle. Combined with Eq. (1), a single data point used for training is the difference in utilities between vehicle p and vehicle q for a given customer r
    (20)
    Note that the dimensionality of each datum could range above 100,000 dimensions for the largest values of γ.
  6. (6)

    For each of these training sets, six logit models were trained in parallel over minibatches of the training data, corresponding to six different settings of the l2 regularization parameter α=0.00001,0.0001,0.001,0.01,0.1,1.0. These logit models were optimized using stochastic gradient descent, with learning rates inversely related to the number of training examples seen [51].

  7. (7)

    Each logit model was then scored according to its respective held-out validation dataset. The hyperparameter settings (αBASELINE) for the original variables, (KPCA,αPCA) for PCA feature learning, (λ1,λ2) for LSD feature learning, and (λ3,γ,αRBM) for RBM feature learning with the best validation accuracy were saved. For each of these four sets of best hyperparameters, step (3) was repeated to obtain the set of corresponding features on each of the ten random shuffled training plus validation sets.

  8. (8)

    Logit models corresponding to the baseline, PCA features, LSD features, and RBM features were retrained for each of the ten randomly shuffled and combined training and validation sets. The prediction accuracy for each of these ten logit models was assessed on the corresponding held-out test sets in order to give the average and standard deviation of the design preference prediction accuracy for the baseline, PCA features, LSD features, and RBM features.

Results.

Table 3 shows the averaged test set prediction accuracy of the logit model using the original variables, PCA features, LSD features, and RBM features. Prediction accuracies averaged over ten random training and held-out testing data splits are given, both for the partial data (N = 10,000) and the full data (N = 1,161,056) cases. Furthermore, we include the standard deviation of the prediction accuracies and a two-sided t-test relative to the baseline accuracy for each feature representation.

Table 3

Averaged preference prediction accuracy on held-out test data using the logit model with the original variables or the three feature representations. Average and standard deviation were calculated from ten random training and testing splits common to each method, while test parameters for each method were selected via cross validation on the training set.

Design preference model   Feature representation                   Prediction accuracy (SD) (p-value), N = 10,000   Prediction accuracy (SD) (p-value), N = 1,161,056
Logit model               Original variables (no features)         69.98% (1.82%) (N/A)                             75.29% (0.98%) (N/A)
Logit model               Principal component analysis             61.69% (1.24%) (1.081 × 10−7)                    62.03% (0.89%) (8.22 × 10−10)
Logit model               Low-rank + sparse matrix decomposition   76.59% (0.89%) (3.276 × 10−8)                    77.58% (0.81%) (4.286 × 10−8)
Logit model               Exponential family sparse RBM            74.99% (0.64%) (2.3 × 10−5)                      75.15% (0.81%) (0.136)

Boldface denotes highest prediction accuracy.

The logit model trained with LSD features achieved the highest predictive accuracy on both the partial and full datasets, at 76.59% and 77.58%, respectively. This gives evidence that using features can improve design preference prediction accuracy, as the logit model using the original variables achieved average accuracies of 69.98% and 75.29%, respectively. The improvement in design preference prediction accuracy is greatest for the partial-data case, as evidenced by both the LSD and RBM results; yet the improvement in the full-data case shows that the LSD feature learning method is still able to improve prediction accuracy within the capacity of the logit model. The RBM results for the full data case do not show significant improvement in prediction accuracy. Finally, we note a relative loss in design preference prediction accuracy when using PCA as a feature learning method, both for the partial and full datasets, suggesting that the heavy assumptions built into PCA are overly restrictive.

The parameter settings for the LSD feature learning method give additional insight into the preference prediction task. In particular, the optimal settings of λ1 and λ2 obtained through cross validation on the ten random training sets yielded low-rank matrices with ranks ranging from r = 29 to r = 31. This significantly reduced rank of the part-worth coefficient matrix given in Eq. (1) suggests that the vast majority of interactions between the customer variables and design variables given in Tables 1 and 2 do not significantly contribute to overall design preferences. This insight allows us to introspect into important feature pairings on a per-customer basis to inform design decisions.

We have shown that even simple single-layer feature learning can significantly increase predictive accuracy for design preference modeling. This finding signifies that features more effectively capture the design preferences than the original variables, as features form functions of the original variables more representative of the customer's underlying preference task. This offers designers opportunity for new insights if these features can be successfully interpreted and translated to actionable design decisions; however, given the relatively recent advances in feature learning methods, interpretation and visualization of features remains an open challenge—see Sec. 6 for further discussion.

Further increases to prediction accuracy might be achieved by stacking multiple feature learning layers, often referred to as “deep learning.” Such techniques have recently shown impressive results by breaking previous records in image recognition by large margins [8]. Another possible direction for increasing prediction accuracy may be in developing novel architectures that explicitly capture the conditional statistical structure between customers and designs. These efforts may be further aided through better understanding of the limitations of using feature learning methods for design and marketing research. For example, the large number of parameters associated with feature learning methods results in greater computational cost when performing model selection; in addition to the cross-validation techniques used in this paper, model selection metrics such as Bayesian information criteria and Akaike information criteria may give further insight along these lines.

Using Features for Design

Using features can support the design process in at least two directions: (1) Feature interpretation can offer deeper insights into customer preferences than the original variables, and (2) feature visualization can lead to a market segmentation with better clustering than with the original variables. These two directions are still open challenges given the relative nascence of feature learning methods. Further investigation is necessary to realize the above design opportunities and to justify the computational cost and implementation challenges associated with feature learning methods.

The interpretation and visualization methods may be used with conventional linear discrete choice modeling (e.g., logit models). However, deeper insights are possible through interpreting and visualizing features, assuming that features capture the underlying design preference prediction task more effectively, as shown through improved prediction accuracy on held-out data. Since we are capturing "functions" of the original data, we are more likely to interpret and visualize feature pairings such as an "eco-friendly" vehicle and an "environmentally conscious" customer; such pairings may ultimately lead to actionable design decisions.

Feature Interpretation of Design Preferences.

Similar to PCA, LSD provides an approach to interpret the learned features by looking at linear combinations of the original variables. The major difference between features learned using PCA and those learned using LSD lies in these linear combinations; in particular, features learned by LSD are more representative as they contain information from both the data distribution and the preference task, while PCA features contain information only from the data distribution.

As introduced in Sec. 4.2, the weight matrix Ω is decomposed into a low-rank matrix L and a sparse matrix S, i.e., Ω = L + S. The nonzero elements in the sparse matrix S may be interpreted as the weights of the products of their corresponding original design and customer variables. As for the low-rank matrix L, the features can be extracted by linearly combining the original variables according to the singular value decomposition (SVD) of L. The SVD is a factorization of the (m+1) × n matrix L in the form L = UΣV, where U is an (m+1) × (m+1) unitary matrix, Σ is an (m+1) × n rectangular diagonal matrix with non-negative real numbers σ1, σ2, …, σmin(m+1,n) on the diagonal, and V is an n × n unitary matrix. Rewriting Eq. (6) yields
(21)

where ui is the ith column of matrix U, and vi is the ith row of matrix V. The ith customer feature [(xc(j))T, 1]ui is a linear combination of the original customer variables; the ith design feature vi xd(p) is a linear combination of the original design variables; and σi represents the importance of this pair of features for the customer's design preferences.
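A minimal numpy sketch of this interpretation step is shown below (it assumes a learned low-rank matrix L and lists of variable names are available; all names are illustrative placeholders).

    import numpy as np

    def top_feature_pairings(L, customer_names, design_names, top=3):
        """Rank customer/design feature pairings by their singular values."""
        U, s, Vt = np.linalg.svd(L, full_matrices=False)
        pairings = []
        for i in range(top):
            u_i, v_i = U[:, i], Vt[i, :]           # ith customer / design feature directions
            cust = sorted(zip(customer_names, u_i), key=lambda t: -abs(t[1]))[:5]
            des = sorted(zip(design_names, v_i), key=lambda t: -abs(t[1]))[:5]
            pairings.append((s[i], cust, des))     # sigma_i weights the pairing's importance
        return pairings

    L = np.random.randn(14, 31)                    # placeholder for the learned low-rank matrix
    cnames = ["cust_var_%d" % i for i in range(14)]
    dnames = ["design_var_%d" % j for j in range(31)]
    for sigma, cust, des in top_feature_pairings(L, cnames, dnames):
        print(round(sigma, 2), cust[0], des[0])    # strongest variable on each side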

Interpreting these features in the vehicle preference case study, we found that the most influential feature pairing (i.e., the one with the largest σi) corresponds to preference trends at the population level: low-price but luxurious vehicles are preferred, and Japanese vehicles receive the highest preference while GM vehicles receive the lowest. The second most influential feature pairing represents a wealthy customer group whose preferred vehicles are both expensive and luxurious. The third most influential feature pairing represents an older customer group whose preferred vehicles are large but have low net horsepower.

Features Visualization of Design Preferences.

We now visualize features to understand what insights they offer for design decision making. Specifically, we make an early-stage inroad into visual market segmentation performed in an estimated feature space, thus clustering customers in a representation that better captures their underlying design preference decisions.

We begin by looking at the utility model Urp given in Eq. (1) and note that the inner product between Ω and the variables xu(r) representing customer r may be interpreted as customer r's optimal vehicle, denoted xopt(r)
(22)
where Ωout is the matrix reshaped from the coefficients of Ω corresponding to the outer product given in Eq. (1), Ωmain is the matrix reshaped from the remaining coefficients, and 1 is a vector consisting of 1's with the same dimension as xu(r). We rewrite the utility model Urp given in Eq. (1) in terms of the optimal vehicle xopt(r)
(23)

By the geometric meaning of the inner product, the smaller the angle between xd(p) and xopt(r), the larger the utility Urp. In this way, we have an interpretable method of improving upon the actual purchased vehicle design in the form of an "optimal" vehicle vector. This optimal vehicle vector could be useful for a manufacturer developing a next-generation design from a current design, particularly when the manufacturer targets a specific market segment.

We now provide a visual demonstration of using an optimal vehicle derived from feature learning to suggest a design improvement direction. First, we calculate the optimal vehicle using Eq. (22) for every customer in the dataset. Then, we visualize these optimal vehicle points by reducing their dimension using t-distributed stochastic neighbor embedding, an advanced nonlinear dimension reduction technique that embeds similar objects into nearby points [52]. Finally, optimal vehicles from targeted market segments are marked in larger red points.
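A minimal sketch of this visualization pipeline is given below (it uses scikit-learn's t-SNE, simplifies Eq. (22) to the interaction part-worths projected through each customer's variables, and runs on synthetic data; it is an illustration under those assumptions rather than our exact implementation).

    import numpy as np
    from sklearn.manifold import TSNE

    def optimal_vehicles(Omega, X_c):
        """Per-customer 'optimal vehicle' vectors: row r is Omega^T x_c(r)
        (a simplified reading of Eq. (22) that omits the Omega_out / Omega_main split)."""
        return X_c @ Omega                          # (n_customers, design-dim)

    rng = np.random.default_rng(0)
    Omega = rng.normal(size=(14, 31))               # learned interaction part-worth matrix
    X_c = rng.normal(size=(500, 14))                # encoded customer variables
    X_opt = optimal_vehicles(Omega, X_c)

    # Embed the optimal vehicles in 2D for visual market segmentation (cf. Fig. 6)
    emb = TSNE(n_components=2, perplexity=30, init="random").fit_transform(X_opt)
    print(emb.shape)                                # (500, 2)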

Figure 6 shows the optimal vehicles for the SCI-XA, MAZDA6, ACURA-TL, and INFINM35 customer groups using red points, respectively. Using the LSD features, we observe that the optimal vehicle moves from the top-left corner to the bottom-right corner as the purchased vehicles become more luxurious, while the optimal vehicles in the original variable representation show overlap, especially for MAZDA6 and ACURA-TL customers. In other words, we are visualizing what has been shown quantitatively through increased preference prediction accuracy; namely, the optimal vehicles represented using LSD features, as opposed to the original variables, result in a larger separation of the various market segments' optimal vehicles.

Fig. 6
Optimal vehicle distribution visualization. Every point represents the optimal vehicle for one consumer. In the left column, the optimal vehicle is inferred using the utility model with original variables. In the right column, LSD features are used to infer the optimal vehicle. In the first row, the optimal vehicles from SCI-XA customers are marked in big red points. Similarly, the optimal vehicles from MAZDA6, ACURA-TL, and INFINM35 customers are marked in big red points, respectively.

The contribution of this demonstration is not the particular introspection on the chosen example with MAZDA6 and ACURA-TL customers. Instead, this demonstration is significant as it suggests that it is possible to perform feature-based market segmentation purely using visual analysis. Such visual analysis is likely to be more useful to practicing designers and marketers, as it abstracts away the underlying mathematical mechanics of feature learning.

Conclusion

Feature learning is a promising method to improve design preference prediction accuracy without changing the design preference model or the dataset. This improvement is obtained by transforming the original variables to a feature space acting as an intermediate step as shown in Fig. 1. Thus, feature learning complements advances in both data gathering and design preference modeling.

We presented three feature learning methods—PCA, LSD, and sparse exponential family RBMs—and applied them to a design preference dataset consisting of customer and passenger vehicle variables with heterogeneous unit types, e.g., gender, age, and no. of cylinders.

We then conducted an experiment to measure design preference prediction accuracy involving 1,161,056 data points generated from a real purchase dataset of 5582 customers. The experiment showed that feature learning methods improve preference prediction accuracy by 2–7% for the partial and full datasets, respectively. This finding is significant, as it shows that features offer a better representation of the customer's underlying design preferences than the original variables. Moreover, the finding shows that feature learning methods may be successfully applied to design and marketing datasets made up of variables with heterogeneous data types; this is a new result, as feature learning methods have primarily been applied to homogeneous datasets made up of variables of the same distribution.

Feature interpretation and visualization offer promise for using features to support the design process. Specifically, interpreting features can give designers deeper insights into the most influential pairings of vehicle features and customer features, while visualization of the feature space can offer deeper insights when performing market segmentation. These new findings suggest opportunities to develop feature learning algorithms that are not only more representative of the customer preference task, as measured by prediction accuracy, but also easier to interpret and visualize by a domain expert. Methods allowing easier interpretation of features would be valuable when translating the results of more sophisticated feature learning and preference prediction models into actionable design decisions.

Acknowledgment

An earlier conference version of this work appeared at the 2014 International Design Engineering Technical Conference. This work has been supported by the National Science Foundation under Grant No. CMMI-1266184. This support is gratefully acknowledged. The authors would like to thank Bart Frischknecht and Kevin Bolon for their assistance in coordinating datasets, Clayton Scott for useful suggestions, and Maritz Research, Inc., for generously making use of their data possible.

Nomenclature

ak, bm = bias parameters for RBM units
D(·) = singular value threshold shrinkage operator
E[·] = expectation
EG, EB, EC = energy functions for Gaussian, binary, and categorical variables
h = features
j = index for arbitrary customer
K = dimension of features
L = low-rank matrix
M = dimension of original variables
MG, MB, MC = dimensions of Gaussian, binary, and categorical original variables
N = number of data points
n, m, k = indices for N, M, K
P(·) = probability
p, q = indices for arbitrary design pair
r = rank of low-rank matrix
S = sparse matrix
si = singular value
Ss(·) = soft-threshold operator
t = step index for proximal gradient
T = transpose
u, v = arbitrary vectors (used for proof)
U, Σ, V = matrices of SVD decomposition
wmk = weight parameter for RBM
xc = customer
xd = design
Xc = all customers
Xd = all designs
y = all purchase design indicators
y(jpq) = indicator variable of purchased design
Zm = dimension of categorical variable
α = l2 norm regularization parameter
β = sparsity activation target for RBM
δ = delta function
γ = feature/original variable size ratio
η = step size for proximal gradient
εjp = Gumbel random variable
ω, Ω = part-worth coefficients of preference model: vector, matrix
λ1 = nuclear norm regularization parameter
λ2 = sparsity regularization parameter (LSD)
λ3 = sparsity regularization parameter (RBM)
σ = sigmoid function
θ = parameters (generic)
||·|| = l2 norm
||·||1 = l1 norm
||·||* = nuclear norm
proxf(·) = proximal operator for f
[·,·] = vector concatenation
⊗ = outer product

Appendix: Proof of Low-Rank Matrix Estimation Guarantee

Although the low-rank matrix is estimated jointly with the sparse matrix through the loss function, an accurate estimate of the low-rank matrix can still be achieved, as guaranteed by the bound in this section. We subsequently provide a variational bound on the divergence of the estimated likelihood from the true likelihood.

To simplify the notation in our proof, we redefine the following notation:

Before our proof, however, we state the following relevant propositions.

Proposition 1. The proximal operator for the nuclear normf=λ1||·||*is the singular value shrinkage operatorDλ1.

Consider the SVD of a matrix Xm×n with rank r
(A1)

where the soft-thresholding operator Sλ1(Σ)=diag({max(siλ1,0)}i=1,,min(m,n)). Moreover, Sλ2(·) is also the proximal operator for the l1 norm.

The matrix decomposition structure of our model builds on the separable sum property [49].

Proposition 2. Separable Sum Property. If f is separable across two variables x and y, i.e.,f(x,y)=f1(x)+f2(y), then
(A2)

Our proof proceeds as follows. Let us denote the optimum of problem (9) as L*, the gradient of the loss function l(·) with respect to L* as ∇L*l, and the matrix minimizing the loss function l(·) as L0.

We next prove the following theorems: Theorem 2 provides a tight bound on ∇L*l. Corollary 1 bounds the estimation error for the learned matrix L*. Theorem 3 follows by bounding the divergence of the estimated likelihood from the true data distribution when l(·) is a likelihood function.

First, we make the weak assumption that the optimization problem given in Eq. (9) is strictly convex, since a necessary and sufficient condition is that the saddle points for l(·) and the regularization terms are not overlapping.

Theorem 2. Loss function gradient bound
Proof. Under the strictly convex assumption, the stationary point (i.e., the optima L* for the optimization problem (9)) is unique. By Lemma 1, iterations of the proximal gradient optimization method Lk converge to this optima L*. According to the fixed point equation for L (Algorithm 1), we have
(A3)
Denote L* − η∇L*l as M, representing the argument of the proximal operator at the optimal low-rank estimate. The SVDs of L*, M, and proxf(M) yield
(A4)
(A5)
(A6)

where U, Uproxm×r; VT,VproxTr×n; and Σ,Σproxr×r with Σ=diag({si}i=1,...,r)andΣprox=diag({siprox}i=1,...,r). UMm×m,VMTn×n, and ΣM is an m × n rectangular diagonal matrix.

Without loss of generality, assume that s1>s2>>sr>0, i.e., these singular values are distinct and positive, thus ensuring column orderings are unique. Thus, we may assert that U=Uprox, V=Vprox, and Σ=Σprox due to the uniqueness of SVD for distinct singular values in L*=proxf(M).

According to Proposition 1,
\mathrm{prox}_f(M) = D_{\eta\lambda_1}(M) = U_M\, S_{\eta\lambda_1}(\Sigma_M)\, V_M^{T} \quad (A7)
Note that the rank of prox_f(M) is generally less than that of M. To bridge the gap between them, we partition Σ_M into two diagonal submatrices Σ_M^+ and Σ_M^−: for each singular value s_i^M of M, i = 1, 2, …, min(m, n), if s_i^M − ηλ1 ≥ 0, then s_i^M is a diagonal element of Σ_M^+; otherwise, s_i^M is a diagonal element of Σ_M^−. Hence, max(Σ_M^+ − ηλ1 I, 0) = Σ_M^+ − ηλ1 I and max(Σ_M^− − ηλ1 I, 0) = 0,

where U_M^+ (V_M^+) denotes the left-singular (right-singular) vectors corresponding to Σ_M^+; U_M^− and V_M^− are defined analogously for Σ_M^−. Again, due to the uniqueness of the SVD, we have U_M^+ = U and V_M^+ = V.

We now rewrite the SVD formulas for prox_f(M) and M as
\mathrm{prox}_f(M) = U_M^{+}\, (\Sigma_M^{+} - \eta\lambda_1 I)\, V_M^{+T} \quad (A8)
M = U_M^{+}\, \Sigma_M^{+}\, V_M^{+T} + U_M^{-}\, \Sigma_M^{-}\, V_M^{-T} \quad (A9)
By definition of M,
M = L^* - \eta\, \nabla_{L^*} l \quad (A10)
Equations (A3) and (A8) indicate that
L^* = \mathrm{prox}_f(M) \quad (A11)
U\, \Sigma\, V^{T} = U_M^{+}\, (\Sigma_M^{+} - \eta\lambda_1 I)\, V_M^{+T} \quad (A12)
By Eqs. (A9), (A10), and (A12), we have
\nabla_{L^*} l = \frac{1}{\eta}(L^* - M) = -\lambda_1\, U V^{T} - \frac{1}{\eta}\, U_M^{-}\, \Sigma_M^{-}\, V_M^{-T} \quad (A13)
Note that every diagonal element s_i^M in Σ_M^− satisfies 0 ≤ s_i^M < ηλ1. Hence,
\|\nabla_{L^*} l\| \le \lambda_1 \sum_{i=1}^{r} \|U_{:i} \otimes V_{:i}\| + \frac{1}{\eta} \sum_{j:\, s_j^M < \eta\lambda_1} s_j^M\, \|[U_M]_{:j} \otimes [V_M]_{:j}\| \quad (A14)
\le \lambda_1 \min(m, n) \quad (A15)

where U_{:i} or V_{:i} is the ith column of matrix U or V, and [U_M]_{:j} or [V_M]_{:j} is the jth column of matrix U_M or V_M.

Summarizing the proof of Theorem 2: the gradient of the loss function at the estimated low-rank matrix is bounded within a ball in the original problem space whose radius is governed by the low-rank regularization parameter λ1. The looseness of the bound comes partially from the second term in inequality (A14), which implies that the bound becomes tighter as the rank of L* increases.
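To illustrate the fixed-point relation (A3) and the flavor of this gradient bound, the sketch below is our own toy example: a quadratic loss stands in for l(·), and the problem sizes and parameters are hypothetical. It runs the proximal gradient iteration and checks that, at convergence, L satisfies the fixed-point equation, while the spectral norm of the loss gradient stays within λ1, in line with the bound discussed above.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, lam1, eta = 10, 8, 0.5, 0.1
L0 = rng.standard_normal((m, 3)) @ rng.standard_normal((3, n))  # toy "true" matrix

def grad_loss(L):
    """Gradient of the toy quadratic loss l(L) = 0.5 * ||L - L0||_F^2."""
    return L - L0

def svd_shrink(X, lam):
    """Proximal operator of lam * ||.||_* (singular value shrinkage)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

# Proximal gradient: L <- prox_{eta*lam1*||.||_*}(L - eta * grad l(L)).
L = np.zeros((m, n))
for _ in range(2000):
    L = svd_shrink(L - eta * grad_loss(L), eta * lam1)

# Fixed-point residual (~0) and spectral norm of the gradient at the optimum.
print(np.linalg.norm(L - svd_shrink(L - eta * grad_loss(L), eta * lam1)))
print(np.linalg.norm(grad_loss(L), 2), lam1)  # gradient spectral norm vs. lam1
```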

Based on the gradient bound given in Theorem 2, we now bound the estimation error of the learned low-rank matrix L*. Although the value of the bound is not explicit in this proof, in some cases we are able to explicitly calculate its value.

Corollary 1. Learned Low-Rank Matrix Estimation Error. The error ||L* − L0||_2 is bounded by the diameter of the minimum-sized ball that includes the set {L : ||∇_L l||_2 ≤ λ1 min(m, n)}.

Proof. The proof directly follows from Theorem 2 and the fact that ∇_{L0} l = 0.

Since the loss function l(·) is convex, the Euclidean norm of its gradient ||∇_L l|| is nondecreasing as the Euclidean distance ||L − L0||_2 increases.

When the loss function is sharp around its minimum, {L : ||∇_L l||_2 ≤ λ1 min(m, n)} is a small region, which implies that L* is a good estimate of L0.

We next bound the likelihood divergence when the loss function l(·) is a likelihood function. To do this, we use Theorem 2 and Corollary 1 to construct a variational bound.

Theorem 3. Variational bound on estimated likelihood
Proof. By the Lagrangian mean value theorem, there exists L1 ∈ {L : L_{ij} ∈ [L*_{ij}, L0_{ij}]} such that
l(L^*) - l(L^0) = \langle \nabla_{L^1} l,\ L^* - L^0 \rangle \quad (A16)

where ⟨A, B⟩ denotes the inner product of vec(A) and vec(B), in which vec(·) is the matrix vectorization operator.

Because of the convexity of l(·), ||∇_{L1} l||_2 ≤ ||∇_{L*} l||_2. By the Cauchy–Schwarz inequality applied to (A16) and by Theorem 2,
|l(L^*) - l(L^0)| \le \|\nabla_{L^1} l\|_2\, \|L^* - L^0\|_2 \quad (A17)
\le \lambda_1 \min(m, n)\, \|L^* - L^0\|_2 \quad (A18)

Summarizing the proof of Theorem 3: the variational bound on the estimated likelihood depends both on the bound on the gradient of the likelihood function l(·) given in Theorem 2 and on the behavior of the likelihood function in the neighborhood of its optimum L0, as described in Corollary 1.
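As a numeric sanity check of the two steps used in this proof, namely the mean value step in (A16) and the norm bound that follows, here is a short sketch with a toy convex quadratic loss; L0, L_star, and the interpolation grid are our own stand-ins rather than quantities from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 6, 5
L0 = rng.standard_normal((m, n))                  # minimizer of the toy convex loss
L_star = L0 + 0.3 * rng.standard_normal((m, n))   # stand-in for the learned matrix

def loss(L):
    """Toy convex likelihood-style loss with minimum at L0."""
    return 0.5 * np.sum((L - L0) ** 2)

def grad(L):
    return L - L0

# Mean value theorem: for some t in [0, 1], l(L*) - l(L0) = <grad(L_t), L* - L0>.
ts = np.linspace(0.0, 1.0, 201)
inner = [np.sum(grad(L0 + t * (L_star - L0)) * (L_star - L0)) for t in ts]
gap = loss(L_star) - loss(L0)
print(min(abs(gap - v) for v in inner))  # ~0: some point on the segment attains the gap

# Cauchy-Schwarz / convexity step: the gap is bounded via the gradient norm at L*.
bound = np.linalg.norm(grad(L_star)) * np.linalg.norm(L_star - L0)
print(gap <= bound + 1e-12)  # True
```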

References

1. Berkovec, J., and Rust, J., 1985, "A Nested Logit Model of Automobile Holdings for One Vehicle Households," Transp. Res. Part B: Methodol., 19(4), pp. 275–285.
2. McFadden, D., and Train, K., 2000, "Mixed MNL Models for Discrete Response," J. Appl. Econometrics, 15(5), pp. 447–470.
3. Reid, T. N., Frischknecht, B. D., and Papalambros, P. Y., 2012, "Perceptual Attributes in Product Design: Fuel Economy and Silhouette-Based Perceived Environmental Friendliness Tradeoffs in Automotive Vehicle Design," ASME J. Mech. Des., 134(4), p. 041006.
4. Norman, D. A., 2007, Emotional Design: Why We Love (or Hate) Everyday Things, Basic Books, New York.
5. He, L., Wang, M., Chen, W., and Conzelmann, G., 2014, "Incorporating Social Impact on New Product Adoption in Choice Modeling: A Case Study in Green Vehicles," Transp. Res. Part D: Transp. Environ., 32, pp. 421–434.
6. Sylcott, B., Michalek, J. J., and Cagan, J., 2013, "Towards Understanding the Role of Interaction Effects in Visual Conjoint Analysis," ASME Paper No. DETC2013-12622.
7. Hinton, G., Deng, L., Yu, D., Dahl, G. E., Mohamed, A.-R., Jaitly, N., Senior, A., Vanhoucke, V., Nguyen, P., Sainath, T. N., and Kingsbury, B., 2012, "Deep Neural Networks for Acoustic Modeling in Speech Recognition: The Shared Views of Four Research Groups," IEEE Signal Process. Mag., 29(6), pp. 82–97.
8. Krizhevsky, A., Sutskever, I., and Hinton, G. E., 2012, "ImageNet Classification With Deep Convolutional Neural Networks," Advances in Neural Information Processing Systems, Vol. 25, pp. 1097–1105.
9. Smolensky, P., 1986, "Information Processing in Dynamical Systems: Foundations of Harmony Theory," Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1, MIT Press, Cambridge, MA, pp. 194–281.
10. Salakhutdinov, R., Mnih, A., and Hinton, G., 2007, "Restricted Boltzmann Machines for Collaborative Filtering," 24th International Conference on Machine Learning, pp. 791–798.
11. Burnap, A., Ren, Y., Gerth, R., Papazoglou, G., Gonzalez, R., and Papalambros, P. Y., 2015, "When Crowdsourcing Fails: A Study of Expertise on Crowdsourced Design Evaluation," ASME J. Mech. Des., 137(3), p. 031101.
12. Panchal, J., 2015, "Using Crowds in Engineering Design–Towards a Holistic Framework," 2015 International Conference on Engineering Design, Design Society, Milan, Italy, July 27–30.
13. Chapelle, O., and Harchaoui, Z., 2004, "A Machine Learning Approach to Conjoint Analysis," Advances in Neural Information Processing Systems, pp. 257–264.
14. Evgeniou, T., Pontil, M., and Toubia, O., 2007, "A Convex Optimization Approach to Modeling Consumer Heterogeneity in Conjoint Estimation," Mark. Sci., 26(6), pp. 805–818.
15. Lee, H., Grosse, R., Ranganath, R., and Ng, A. Y., 2011, "Unsupervised Learning of Hierarchical Representations With Convolutional Deep Belief Networks," Commun. Assoc. Comput. Mach., 54(10), pp. 95–103.
16. Wassenaar, H. J., and Chen, W., 2003, "An Approach to Decision-Based Design With Discrete Choice Analysis for Demand Modeling," ASME J. Mech. Des., 125(3), pp. 490–497.
17. Lewis, K. E., Chen, W., and Schmidt, L. C., 2006, Decision Making in Engineering Design, American Society of Mechanical Engineers, New York.
18. Michalek, J., Feinberg, F., and Papalambros, P., 2005, "Linking Marketing and Engineering Product Design Decisions Via Analytical Target Cascading," J. Prod. Innovation Manage., 22(1), pp. 42–62.
19. Chen, W., Hoyle, C., and Wassenaar, H. J., 2013, Decision-Based Design, Springer, London, UK.
20. Wassenaar, H., Chen, W., Cheng, J., and Sudjianto, A., 2005, "Enhancing Discrete Choice Demand Modeling for Decision-Based Design," ASME J. Mech. Des., 127(4), pp. 514–523.
21. Kumar, D., Hoyle, C., Chen, W., Wang, N., Gomez-Levi, G., and Koppelman, F., 2007, "Incorporating Customer Preferences and Market Trends in Vehicle Packaging Design," ASME Paper No. DETC2007-35520.
22. Burnap, A., Hartley, J., Pan, Y., Gonzalez, R., and Papalambros, P. Y., 2015, "Balancing Design Freedom and Brand Recognition in the Evolution of Automotive Brand Styling," Design Science, Vol. 7, Cambridge University Press, Cambridge, UK, pp. 1–27.
23. Orsborn, S., Cagan, J., and Boatwright, P., 2009, "Quantifying Aesthetic Form Preference in a Utility Function," ASME J. Mech. Des., 131(6), p. 061001.
24. Morrow, W. R., Long, M., and MacDonald, E. F., 2014, "Market-System Design Optimization With Consider-Then-Choose Models," ASME J. Mech. Des., 136(3), p. 031003.
25. Ren, Y., Burnap, A., and Papalambros, P., 2013, "Quantification of Perceptual Design Attributes Using a Crowd," 19th International Conference on Engineering Design (ICED13), Design for Harmonies: Design Information and Knowledge, Seoul, Korea, Vol. 6, Paper No. DS 75-6.
26. Toubia, O., Simester, D. I., Hauser, J. R., and Dahan, E., 2003, "Fast Polyhedral Adaptive Conjoint Estimation," Mark. Sci., 22(3), pp. 273–303.
27. Abernethy, J., Evgeniou, T., Toubia, O., and Vert, J.-P., 2008, "Eliciting Consumer Preferences Using Robust Adaptive Choice Questionnaires," IEEE Trans. Knowl. Data Eng., 20(2), pp. 145–155.
28. Tuarob, S., and Tucker, C. S., 2015, "Automated Discovery of Lead Users and Latent Product Features by Mining Large Scale Social Media Networks," ASME J. Mech. Des., 137(7), p. 071402.
29. Van Horn, D., and Lewis, K., 2015, "The Use of Analytics in the Design of Sociotechnical Products," Artif. Intell. Eng. Des., Anal. Manuf., 29(1), pp. 65–81.
30. Dym, C. L., Agogino, A. M., Eris, O., Frey, D. D., and Leifer, L. J., 2005, "Engineering Design Thinking, Teaching, and Learning," J. Eng. Educ., 94(1), pp. 103–120.
31. Friedman, J., Hastie, T., and Tibshirani, R., 2001, The Elements of Statistical Learning, Vol. 1, Springer Series in Statistics, Springer, Berlin.
32. Gonzalez, R., and Wu, G., 1999, "On the Shape of the Probability Weighting Function," Cognit. Psychol., 38(1), pp. 129–166.
33. Evgeniou, T., Boussios, C., and Zacharia, G., 2005, "Generalized Robust Conjoint Estimation," Mark. Sci., 24(3), pp. 415–429.
34. Hauser, J. R., and Rao, V. R., 2004, "Conjoint Analysis, Related Modeling, and Applications," Advances in Marketing Research: Progress and Prospects, New York, pp. 141–168.
35. Papalambros, P. Y., and Wilde, D. J., 2000, Principles of Optimal Design: Modeling and Computation, Cambridge University Press, Cambridge, UK.
36. Lenk, P. J., DeSarbo, W. S., Green, P. E., and Young, M. R., 1996, "Hierarchical Bayes Conjoint Analysis: Recovery of Partworth Heterogeneity From Reduced Experimental Designs," Mark. Sci., 15(2), pp. 173–191.
37. Netzer, O., Toubia, O., Bradlow, E. T., Dahan, E., Evgeniou, T., Feinberg, F. M., Feit, E. M., Hui, S. K., Johnson, J., Liechty, J. C., Orlin, J. B., and Rao, V. R., 2008, "Beyond Conjoint Analysis: Advances in Preference Measurement," Mark. Lett., 19(3–4), pp. 337–354.
38. Lee, H., Ekanadham, C., and Ng, A. Y., 2008, "Sparse Deep Belief Net Model for Visual Area V2," Advances in Neural Information Processing Systems, Vol. 20, pp. 873–880.
39. Maritz Research, 2007, "Maritz Research 2006 New Vehicle Customer Satisfaction Survey," http://www.maritz.com
40. Chrome Systems, 2008, "Chrome New Vehicle Database," http://www.chrome.com
41. United States Census Bureau, 2006, "2006 U.S. Census Estimates," http://www.census.gov
42. Shocker, A. D., Ben-Akiva, M., Boccara, B., and Nedungadi, P., 1991, "Consideration Set Influences on Consumer Decision-Making and Choice: Issues, Models, and Suggestions," Mark. Lett., 2(3), pp. 181–197.
43. Wang, M., and Chen, W., 2015, "A Data-Driven Network Analysis Approach to Predicting Customer Choice Sets for Choice Modeling in Engineering Design," ASME J. Mech. Des., 137(7), p. 071410.
44. Von Neumann, J., and Morgenstern, O., 2007, Theory of Games and Economic Behavior (60th Anniversary Commemorative Edition), Princeton University Press, Princeton, NJ.
45. Fuge, M., 2015, "A Scalpel Not a Sword: On the Role of Statistical Tests in Design Cognition," ASME Paper No. DTM-46840.
46. Fazel, M., 2002, "Matrix Rank Minimization With Applications," Ph.D. thesis, Stanford University, Stanford, CA.
47. Tomioka, R., Suzuki, T., Sugiyama, M., and Kashima, H., 2010, "A Fast Augmented Lagrangian Algorithm for Learning Low-Rank Matrices," 27th International Conference on Machine Learning (ICML-10), pp. 1087–1094.
48. Liu, G., and Yan, S., 2014, "Scalable Low-Rank Representation," Low-Rank and Sparse Modeling for Visual Analysis, Y. Fu, ed., Springer International Publishing, Berlin, pp. 39–60.
49. Parikh, N., and Boyd, S., 2013, "Proximal Algorithms," Found. Trends Optim., 1(3), pp. 1–122.
50. Hinton, G. E., 2002, "Training Products of Experts by Minimizing Contrastive Divergence," Neural Comput., 14(8), pp. 1771–1800.
51. Bottou, L., 2010, "Large-Scale Machine Learning With Stochastic Gradient Descent," COMPSTAT'2010, Springer, Berlin, pp. 177–186.
52. van der Maaten, L., 2008, "Visualizing Data Using t-SNE," J. Mach. Learn. Res., 9, pp. 2579–2605.