MONOTONIC NEURAL NETWORK: COMBINING DEEP LEARNING WITH DOMAIN KNOWLEDGE FOR CHILLER PLANTS ENERGY OPTIMIZATION

Abstract

In this paper, we are interested in building a domain-knowledge-based deep learning framework to solve chiller plant energy optimization problems. Compared to the mainstream applications of deep learning (e.g., image classification and NLP), it is difficult to collect enough data for deep network training in real-world physical systems. Most existing methods reduce the complex system to a linear model to facilitate training on small samples. To tackle the small-sample-size problem, this paper incorporates domain knowledge into the structure and loss design of the deep network to build a nonlinear model with a lower-redundancy function space. Specifically, the energy consumption of most chillers can be physically viewed as an input-output monotonic problem. Thus, we design a neural network with monotonic constraints to mimic the physical behavior of the system. We verify the proposed method on the cooling system of a data center; experimental results show the superiority of our framework in energy optimization compared to existing methods.

1. INTRODUCTION

The demand for cooling in data centers, factories, malls, railway stations, airports and other buildings is rapidly increasing as the global economy develops and the level of informatization improves. According to statistics from the International Energy Agency (IEA, 2018), cooling accounts for 20% of the total electricity used in buildings around the world today. Therefore, it is necessary to perform refined management of the cooling system to reduce energy consumption and improve energy utilization. Chiller plants are among the main energy-consuming equipment of the cooling system. Due to the nonlinear relationship between parameters and energy consumption, and performance changes over time and with age, deep learning is well suited to modeling chiller plants. In recent years, deep learning research (Goodfellow et al., 2016) has made considerable progress, and algorithms have achieved impressive performance on tasks such as vision (Krizhevsky et al., 2012; He et al., 2016), language (Mikolov et al., 2011; Devlin et al., 2018), and speech (Hinton et al., 2012; Oord et al., 2016). Generally, their success relies on a large amount of labeled data, but real-world physical systems make data collection limited, expensive, and low-quality due to security constraints, collection costs, and potential failures. Therefore, deep learning applications are extremely difficult to deploy in real-world systems. There is some research on few-sample learning, summarized in Lu et al. (2020), which focuses on how to transfer knowledge learned in other tasks to few-sample tasks, with applications in computer vision, natural language processing, speech and other areas. Domain knowledge that has been scientifically demonstrated, however, is even more important in few-sample learning tasks, especially in the application of physical system optimization.
Domain knowledge can provide more derivable and demonstrable information, which is very helpful for physical system optimization tasks that lack samples. In this article we discuss methods for combining machine learning algorithms with domain knowledge and their application to chiller energy optimization. In particular, we propose a monotonic neural network (MNN), which constrains the input-output behavior of the chiller power model to conform to physical laws and provides an accurate function space for chiller plants. Using MNN for system identification helps the subsequent optimization step and improves optimization performance by 1.5% compared with state-of-the-art methods.

2. BACKGROUND AND RELATED WORK

Chiller plant energy optimization is an optimization problem of minimizing energy. To simplify the optimization process, the optimized system is usually assumed to be stable, which means that for each input of the system, the corresponding output is assumed to be time-independent. The mostly used methods are model-based optimization (MBO) (Ma & Wang, 2009; Ma et al., 2011; Huang & Zuo, 2014), although some research uses reinforcement learning models for optimal control (Wei et al., 2017; Li et al., 2019; Ahn & Park, 2020). However, applying RL to the control of real-world physical systems becomes complicated because of unexpected events, safety constraints, limited observations, and potentially expensive or even catastrophic failures (Lazic et al., 2018). MBO has been proven to be a feasible method to improve the operating efficiency of chillers: it uses a chiller plant model to estimate the energy consumption for given control parameters under the predicted or measured cooling load and outside weather conditions. An optimization algorithm is then used to find the values of the control parameters that minimize energy consumption (Malara et al., 2015). The model can be a physics-based model or a machine learning model. Physics-based models are at the heart of today's engineering and science; however, they are hard to apply due to the complexity of the cooling system. Experts need to spend a lot of time modeling based on domain knowledge (Ma et al., 2008), and when the system changes (structure adjustment, equipment aging, replacement), the model needs to be re-adapted. In recent years, the data-driven approach has gradually become a viable alternative. Its advantages lie in its ability to self-learn from historical data and to adapt to changes. Thanks to its stability and efficiency, linear regression is the mostly used modeling method in real-world cooling system optimal control tasks (Zhang et al., 2011; Lazic et al., 2018).
But ordinary linear models cannot capture the nonlinear relationships between parameters and energy consumption, and polynomial regression overfits very easily. With the remarkable progress of deep learning research, some studies apply it to cooling systems (Gao, 2014; Evans & Gao, 2016; Malara et al., 2015). Deep learning is very good at fitting nonlinear relationships, but it relies on a large amount of data and is highly nonlinear, which brings great difficulties to subsequent decision-making. Because a large amount of data cannot be obtained, frontier studies have begun to consider integrating domain knowledge into the process of system identification and optimization (Vu et al., 2017; Karpatne et al., 2017; Muralidhar et al., 2018; Jia et al., 2020). These combination methods have made laudable progress, although they are still at a relatively early stage. In conclusion, the reinforcement learning approach requires either a detailed system model for simulation or an actual system that can be tested repeatedly; the cooling system is too complex to simulate, so the former is impossible, while in actual system design and implementation the latter may be impractical. The MBO method has been proven feasible for optimal control, and its optimization performance is determined by the system identification model. However, physical models are complex and time-consuming to build, linear machine learning models have poor fitting ability, and neural networks require large-scale datasets while their high nonlinearity is not conducive to the subsequent optimization step. Domain knowledge can provide more knowledge for machine learning; in this article, we give a theoretical analysis and methodological description of the combination of domain knowledge and deep networks. In particular, we propose a monotonic neural network that can capture the operating logic of the chiller.
Compared with the above state-of-the-art methods, MNN reduces the dependence on the amount of data, provides a more accurate function space, facilitates the subsequent optimization step, and improves optimization performance.

3. MACHINE LEARNING COMBINED WITH DOMAIN KNOWLEDGE

Consider a general machine learning problem, and let us explain machine learning from another angle. It is well known that the life cycle of machine learning modeling includes three key elements: data, model, and optimization algorithm.

f* = arg min_{f ∈ F} R_exp(f), s.t. constraints (1)

First, a set of candidate functions is generated by the model. Then, under the information constraints of the training datasets, the optimal function approximation is found in this set through optimization strategies. Deep learning models have strong representation capabilities and a huge function space, which is a double-edged sword. In few-sample learning tasks, if we can use domain knowledge to give a more precise function space, cleverer optimization strategies, and more information injected into the training datasets, then the function approximation to be solved will have higher accuracy and lower generalization error. Prior knowledge is relatively abstract and can be roughly summarized as: properties (relations, ranges), logic (constraints), and science (physical models, mathematical equations). Several ways in which domain knowledge can help machine learning are summarized in this paper, as follows.

Science provides an accurate collection of functions. If the physical model is known but its parameters are unknown, machine learning parameter optimization algorithms and training samples can be used to optimally estimate the parameters of the physical model. This reduces the difficulty of building physical models.

Incorporating prior domain knowledge into data. The machine learning algorithm learns from data, so adding additional property knowledge to the data raises the upper limit of model performance, for example: constructing features based on correlations between properties; handling exceptions based on the legal range of properties; enriching data within the safe operating region of the system; etc.
Incorporating prior domain knowledge into the optimization algorithm. The optimization goals in machine learning can be constructed according to performance targets. Therefore, logic constraints in domain knowledge that have an important impact on model performance can be added as penalties to the optimization objective function. That makes the input and output of the model conform to the laws of physics and improves the usability of the model in optimization tasks.

Incorporating prior domain knowledge into the model. Another powerful aspect of deep learning is its flexible model construction capability. Feature ranges and logic constraints from domain knowledge can guide the design of the deep learning model structure, which can significantly reduce the search space of function structures and parameters and improve the usability of the model.

4. CHILLER PLANTS ENERGY OPTIMIZATION

This section introduces the application of machine learning combined with domain knowledge to optimize the energy consumption of chiller plants. The models described below have actually been applied to the cooling system of a real data center. We use the model-based optimization method to optimize the chiller plants. The first step is to identify the chiller plants. We decompose the chiller plants into three types of models: the cooling/chilled water pump power model, the cooling tower power model, and the chiller power model; see Equation 2.

y = P_CH + P_CT + P_COWP + P_CHWP (2)

4.1. MODEL WITH SCIENTIFIC

For the modeling of the cooling tower power and the cooling/chilled water pump power, we know the physical model from domain knowledge: the input frequency and output power have a cubic relationship (Dayarathna et al., 2015); see Equation 3.

y = f(x; θ) = P_de · [θ_3 · (x/F_de)^3 + θ_2 · (x/F_de)^2 + θ_1 · (x/F_de) + θ_0] (3)

Here x is the input parameter, the equipment operating frequency; P_de is the rated power; F_de is the rated frequency, a known parameter that needs to be obtained in advance; and θ_0, θ_1, θ_2, θ_3 are the model parameters to be learned.
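Since the functional form of Equation 3 is known, the parameters θ can be estimated by ordinary least squares on historical (frequency, power) samples. The sketch below shows one way to do this; the function names and the synthetic rated values in the usage are illustrative, not from the paper.

```python
import numpy as np

def fit_pump_power_model(freq, power, f_rated, p_rated):
    """Least-squares estimate of theta_0..theta_3 in Equation 3,
    using the normalized frequency u = x / F_de."""
    u = freq / f_rated
    # Design matrix with columns [1, u, u^2, u^3]; target normalized by rated power
    A = np.stack([np.ones_like(u), u, u**2, u**3], axis=1)
    theta, *_ = np.linalg.lstsq(A, power / p_rated, rcond=None)
    return theta  # theta[k] multiplies (x/F_de)^k

def pump_power(freq, theta, f_rated, p_rated):
    """Evaluate Equation 3 for a given frequency."""
    u = freq / f_rated
    return p_rated * (theta[0] + theta[1] * u + theta[2] * u**2 + theta[3] * u**3)
```

Because the model is linear in θ, this fit is convex and needs only a handful of samples, which is exactly why encoding the known physics shrinks the function space.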

4.2. FEATURES WITH PROPERTIES

For the modeling of chiller power, we can integrate relational information between properties into the features to improve the fitting ability of the model, by analyzing how the chiller plants work (Appendix A.1).

y_CH ∝ T_condenser, Q_cooling_loads (4a)
T_condenser ∝ T_cow_in, F_cow_pump (4b)
T_cow_in ∝ T_cow_out, T_wb, 1/F_fan (4c)
T_cow_out ∝ T_condenser, T_cow_in, 1/F_cow_pump (4d)
Q_cooling_loads ∝ (T_chw_in − T_chw_out), Q_chilled_water_flow (4e)
Q_chilled_water_flow ∝ F_chw_pump (4f)

Equation 4 lists the causal relations between y_CH and the variables on the cooling side and chilled side, and the correlations between variables. Because T_cow_in and T_cow_out are autoregressive attributes related to the time series, they cannot be used as features. We obtain the features listed in Equation 5.

x_CH = (T_wb, T_chw_out, T_chw_in, F_cow_pump, F_fan, F_chw_pump) (5)

4.3. OBJECTIVE FUNCTION WITH LOGIC

For the modeling of chiller power, we choose an MLP as the power estimation model of the chiller. The MLP model fits the nonlinear relationship between input and output well. However, the estimated hyperplane of chiller power, (c, f_chiller(x)), is non-smooth and non-convex due to the limited data and the high nonlinearity of the neural network, so the estimated hyperplane of total power to be optimized, (c, f_total(x)), has multiple local minimum points; see Figure 4.1. Moreover, judging from the performance curve, the input and output of the model do not match the operating principle of the chiller. This brings great difficulties to the later optimization steps, which is why deep learning is rarely used in the control of real physical systems. The chiller plants follow operating logic such as: increasing the frequency of the cooling tower fan decreases the power of the chiller. So the model's natural curve along each parameter should be monotonic. In Equation 6, we use the sigmoid function to map the difference between the power-estimate labels of sample A and sample B into a probability estimate of y_A > y_B, and then use cross entropy to measure the distance between the estimated distribution Sigmoid(ŷ_A − ŷ_B) and the true distribution I(y_A > y_B) as a penalty term:

L_6 = −[ I(y_A > y_B) · log Sigmoid(ŷ_A − ŷ_B) + I(y_A ≤ y_B) · log(1 − Sigmoid(ŷ_A − ŷ_B)) ] (6)

In Equation 7, when the estimated order of the labels of sample A and sample B does not match the truth, we use the difference of the estimated labels as a penalty:

L_7 = | ŷ_A − ŷ_B | · I( sign(ŷ_A − ŷ_B) ≠ sign(y_A − y_B) ) (7)
With the above penalty terms added, the learning of the model is constrained by physical laws, so that the natural curve of the model is monotonic (see Figure 4.3), the estimated hyperplane is smooth, and the optimized plane is convex, making it easy to use convex optimization methods to obtain the optimal control parameters; see Figure 4.4.
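The two penalties described above can be sketched in a few lines of NumPy; the function names are illustrative, and how A/B sample pairs are drawn (e.g., pairs differing along one monotonic feature) is left to the training loop.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pairwise_monotonic_penalty(y_hat_a, y_hat_b, y_a, y_b):
    """Equation 6 style: cross entropy between Sigmoid(yhatA - yhatB)
    and the true ordering indicator I(yA > yB)."""
    target = (y_a > y_b).astype(float)
    p = sigmoid(y_hat_a - y_hat_b)
    eps = 1e-12  # numerical floor to keep the logs finite
    return -np.mean(target * np.log(p + eps) + (1 - target) * np.log(1 - p + eps))

def order_mismatch_penalty(y_hat_a, y_hat_b, y_a, y_b):
    """Equation 7 style: |yhatA - yhatB| only where the predicted
    order disagrees with the true order."""
    mismatch = np.sign(y_hat_a - y_hat_b) != np.sign(y_a - y_b)
    return np.mean(np.abs(y_hat_a - y_hat_b) * mismatch)
```

Either term is added to the usual regression loss; correctly ordered pairs contribute little, while misordered pairs are pushed back toward the monotonic behavior.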

4.4. MODEL STRUCTURE WITH LOGIC

Section 4.3 described integrating logic constraints by adding penalty terms to the loss function, so that the trained model conforms to the physical law of monotonic input-output. This section describes how to use parameter constraints, constraints(θ), and model structure design, ḟ, to further improve the model's compliance with physical laws and its accuracy; see Equation 8.

y = ḟ(x, constraints(θ)), s.t. the x-y mapping satisfies the physical laws (8)

Inspired by ICNN (Amos et al., 2017), we design a monotonic neural network that obtains input-output monotonicity through parameter constraints and model structure design, called hard-MNN. Correspondingly, the model in the previous section that learns monotonicity through the objective and loss function is called soft-MNN.

4.4.1. HARD-MNN

The model structure is shown in Figure 4.5.

Figure 4.5: hard-MNN. X is the input, y is the output, M is the mask layer, Z_i are the hidden layers; W_x are the pass-through layer weights, W_z the main hidden-layer weights, W_y the output-layer weights; σ is the activation function, + is the aggregation function.

The main structure of the model is a multi-layer fully connected feedforward neural network, and the mask layer function (Equation 9) is added after the input layer to identify the monotonic direction of x_i: if x_i is monotonically decreasing, take its opposite; otherwise it remains unchanged.

f_m(x) = −x if x ∈ Decrease set; x if x ∈ Increase set (9)

In the model definition, we constrain the weights to be non-negative (W_x ≥ 0, W_y ≥ 0, W_z ≥ 0). Combined with the mask layer, this guarantees the physical law of a monotonically increasing or decreasing mapping from input to output. Because the non-negative constraints on the weights hurt the model's fitting ability, a "pass-through" layer that connects the input layer to each hidden layer is added to the network structure to achieve better representation capability. There are generally two aggregation functions, plus and concat, which can be selected as appropriate, but the experimental results show no significant difference between them:

z_i = W_i^(z) · z_{i−1} + W_i^(x) · x (plus), or z_i = [W_i^(z) · z_{i−1}; W_i^(x) · x] (concat) (10)

Similar common structures are residual networks (He et al., 2016) and densely connected convolutional networks (Huang et al., 2017); the difference is that those are connections between hidden layers. What also needs to be considered is that the non-negative constraint on the weights is detrimental to the nonlinear fitting ability: it leaves the model able to fit only low-order monotonic functions. Therefore, some improvements have been made in the design of the activation function.
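A minimal forward pass illustrating the hard-MNN construction (mask layer, non-negative weights, pass-through connections with "plus" aggregation) is sketched below. The function names and shapes are assumptions; a plain ReLU stands in for the activation, since any monotonically non-decreasing activation preserves the argument: non-negative weights compose monotone maps, so the output is monotone in each masked input.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def hard_mnn_forward(x, mask, Wx, Wz, Wy):
    """hard-MNN sketch with 'plus' aggregation (Equation 10).
    mask[i] = -1 for monotonically decreasing inputs, +1 otherwise (Equation 9).
    Wx: list of k+1 non-negative pass-through matrices, Wz: list of k
    non-negative hidden matrices, Wy: non-negative output vector."""
    xm = x * mask                      # mask layer (Equation 9)
    z = relu(Wx[0] @ xm)               # first hidden layer
    for Wzi, Wxi in zip(Wz, Wx[1:]):   # hidden layers + pass-through connections
        z = relu(Wzi @ z + Wxi @ xm)
    return float(Wy @ z)               # non-negative linear output layer
```

With random non-negative weights, increasing a feature masked +1 never decreases the output, and increasing a feature masked −1 never increases it, which is the structural guarantee the section describes.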
Part of the physical system behaves as an exponential monotonic function, but to improve the versatility of the model, we designed a parametric truncated rectified linear unit (PTRelu), Equation 11, which improves the ability to fit higher-order monotonic functions.

f_σ(x) = min(α · sigmoid(βx), max(0, x)) (11)

α and β are hyperparameters or learnable parameters: α is the upper bound on the output of the activation function, and β determines the smoothness of the bound, ensuring high-order nonlinearity while weakening gradient explosion. An input-output comparison of the activation functions is shown in Figure 4.6. In addition, we extend the monotonic neural network to make it more general, referring to (Amos et al., 2017; Chen et al., 2019): the partial monotonicity neural network in A.4, the monotonicity recurrent neural network in A.5, etc. Adding the individual power models yields a total power model with convex properties, which is similar to ICNN. However, ICNN only guarantees the convexity of the objective function, which facilitates the optimization solution but guarantees neither compliance with the physical laws nor the accuracy of the optimal value.
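Equation 11 can be written directly in NumPy; the sketch below (function name assumed) makes the two roles of the parameters concrete: α caps the output, and because both branches of the min are non-decreasing, PTRelu stays monotonic everywhere.

```python
import numpy as np

def ptrelu(x, alpha=4.0, beta=1.0):
    """PTRelu (Equation 11): min(alpha * sigmoid(beta * x), max(0, x)).
    alpha bounds the output from above; beta controls how smoothly
    the bound is approached. The min of two non-decreasing functions
    is non-decreasing, so monotonicity is preserved."""
    sig = 1.0 / (1.0 + np.exp(-beta * x))
    return np.minimum(alpha * sig, np.maximum(0.0, x))
```

Near zero the max(0, x) branch dominates and the unit behaves like a ReLU; for large inputs the α·sigmoid branch takes over and saturates at α.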

5. EXPERIMENTS

We evaluate the performance of MNN-based and MLP-based optimization methods in a large data center cooling system. Since the performance of MBO mainly depends on the quality of the underlying model, we first compare the accuracy of the two system identification models. Then we compare the energy consumption of the two models under the same cooling load and external conditions.

Comparison of model estimation accuracy. From Figure 5.1 we can see that the accuracy and stability of the MNNs are better than those of the MLP, because MNN provides a prior, more accurate function space.

Comparison of energy consumption. Considering that energy consumption is related not only to internal control but also to external conditions (cooling load and outside weather), to ensure a reasonable comparison we compare PUE at the same wet bulb temperature. As shown in Figure 5.2, hard-MNN is more energy-efficient and stable, and finally reduces the average PUE by about 1.5% compared with MLP.

We have summarized the symbols used in the article; see Table 2. There are two types of variables collected in the cooling system, control variables c and state variables s, plus powers. Control variables are parameters that can be manually adjusted; state variables are factors not subject to manual adjustment, but both affect the energy consumption of the system. x is the input feature of the models and y is the output target of the models. θ represents the parameters of the models. The following symbols represent actual variables in the cooling system: F_cow_pump and F_fan are the control variables we want to optimize; T_wb, T_chw_out, T_chw_in, F_chw_pump, T_cow_out, T_cow_in are environment variables; P_CH, P_CT, P_COWP, P_CHWP are the powers of each piece of equipment in the chiller plants.

A.3 OPTIMAL CONTROL

Chiller plant energy optimization is an optimization problem of minimizing energy. To simplify the optimization process, the optimized system is usually assumed to be stable, meaning that for each input of the system the corresponding output is assumed to be time-independent. Commonly used methods are model-free strategy optimization and model-based optimization. The strategy optimization method controls according to rules summarized from experience. The model-based optimization method has two steps, system identification and optimization; see Figure A.2. The first step is to model the system, that is, to build the mapping function f: x → y between features and energy consumption, as shown in Equation 12; this step is usually done offline. In the second step, a constrained objective function is created based on the function from the first step, and the optimization algorithm is used to find the optimal values of the control parameters. The solved values are sent to the controller of the cooling system; this step is usually performed online.

1. identification: y = f(x; θ)
2. optimization: x* = arg min_{x ∈ X} f(x; θ), s.t. some constraints (12)

The modeling in the first step is the key step and the core content of this article, because it directly determines whether the implementation of optimization is troublesome, and indirectly determines the accuracy of the optimal value.
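The two steps of Equation 12 can be sketched end to end on a toy model. Everything below is illustrative: a hypothetical quadratic plant model stands in for f(x; θ), and a grid search over the feasible range X stands in for the optimization algorithm.

```python
import numpy as np

def identify(x_hist, y_hist):
    """Step 1 (offline): fit y = f(x; theta); here a quadratic toy model
    in a single control variable, fit by least squares."""
    A = np.stack([np.ones_like(x_hist), x_hist, x_hist**2], axis=1)
    theta, *_ = np.linalg.lstsq(A, y_hist, rcond=None)
    return theta

def optimize(theta, x_min, x_max, n=2001):
    """Step 2 (online): search the feasible range [x_min, x_max] for the
    control value minimizing the identified model."""
    grid = np.linspace(x_min, x_max, n)
    power = theta[0] + theta[1] * grid + theta[2] * grid**2
    return grid[np.argmin(power)]
```

In the real system the identified model is the MNN and the search respects the operational constraints, but the offline/online split is the same.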

A.4 PARTIAL-MNN

Of course, when applied to other scenarios, the structure of hard-MNN may not be applicable because the features may not all be monotonic in the output, so we extend hard-MNN to partial-MNN; the model structure is shown in Figure A.3. The partial-MNN has one more branch network than hard-MNN, and the mask layer has also been modified. The partial mask layer, Equation 13, is designed to identify monotonically decreasing, monotonically increasing, and non-monotonic features.

f_m1(x) = 0 if x is non-monotonic; −x if x ∈ Decrease set; x if x ∈ Increase set (13a)
f_m2(x) = x if f_m1(x) = 0; 0 if f_m1(x) ≠ 0 (13b)

The monotonic features are fed into the backbone network through the f_m1 mapping of the mask layer, x_m = f_m1(x). The non-monotonic features are fed into the branch network, x_n = f_m2(x), through the f_m2 mapping of the mask layer. The branch network has no parameter constraints, uses the ordinary ReLU activation function, and merges with the backbone network at each layer; see Figure A.3.

A.5 MRNN

MRNN replaces the main structure with an RNN to support the modeling of time-dependent systems, and adds monotonicity along the time dimension by constraining parameters, compared to MNN. As we mentioned earlier, the cooling system is a dynamic system with time delay; to simplify, it was assumed to be non-dynamic. When the collected data granularity is dense enough, MRNN can be used to model the chiller plants. The MRNN model structure is shown in Figure A.4. In the model structure, we constrain part of the weight parameters to be non-negative (U ≥ 0, V ≥ 0, W ≥ 0, D_1 ≥ 0, D_2 ≥ 0, D_3 ≥ 0) to ensure that both the input-output mapping and the time dimension are monotonic, and a mask layer is added to the input layer. The hidden layers use the PTRelu activation function, and the output layer uses ReLU. D_1, D_2, D_3 are the weights of the pass-through layers that improve the fitting ability of the network.
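The partial mask layer of Equation 13 amounts to routing each feature by its monotonic direction. A small sketch, with an assumed per-feature direction code (+1 increasing, −1 decreasing, 0 non-monotonic):

```python
import numpy as np

def f_m1(x, direction):
    """Equation 13a: route monotonic features to the backbone network,
    flipping the sign of decreasing ones and zeroing non-monotonic ones."""
    return np.where(direction > 0, x, np.where(direction < 0, -x, 0.0))

def f_m2(x, direction):
    """Equation 13b: route non-monotonic features to the unconstrained
    branch network, zeroing everything else."""
    return np.where(direction == 0, x, 0.0)
```

Each feature thus appears in exactly one of the two network inputs, so the backbone keeps its monotonicity guarantee while the branch handles the rest.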



Footnotes:
1. How chiller plants work is described in Appendix A.1.
2. How MBO methods work is described in Appendix A.3.
3. The natural curve, also called the sensitivity curve, is the change curve of y along a certain dimension of X.
4. I(·) is the indicator function.
5. T_chw_out and F_chw_pump can also be controlled, but they affect the energy consumption of the AHUs; to simplify the optimization process, no optimization control is performed on them.



Figure 4.1: natural curve.

Left: bad natural curve of F_fan. Right: bad natural curve of F_cow_pump.

Figure 4.2: bad natural curves. Each curve is a sample.

Left: good natural curve of F_fan. Right: good natural curve of F_cow_pump.

Figure 4.3: good natural curve.

Figure 4.4: good identification and optimization hyperplane.

Figure 4.6: PTRelu.

Figure 5.1: Boxplot of the MAPE of MLP, hard-MNN and soft-MNN, trained on real data collected from the cooling system of a DC. Each model has the same number of hidden layers and neurons per layer, as well as the same training set, test set, and features. The result is obtained over 100 non-repeated tests.

Figure A.1: Cooling System Structure.

Figure A.2: Model-based optimal control. The solid line is the identification step, the dotted line is the optimization step.



Figure A.3: partial-MNN.

Figure A.4: MRNN.

The natural curve output by the vanilla MLP model does not conform to this rule; see Figure 4.2.

Figure 5.2: Energy consumption comparison in the real system. MLP is hard to use in real-world system optimization due to its high nonlinearity, so we use MLP with a local PID for safety constraints.

Table 2: Table of notations.

Symbol | Description
c | control variables (parameters that can be manually adjusted)
s | state variables (factors not subject to manual adjustment)
x | input features of the models
y | output target of the models
θ | parameters of the models
F_cow_pump, F_fan | control variables to optimize
T_wb, T_chw_out, T_chw_in, F_chw_pump, T_cow_out, T_cow_in | environment variables
P_CH, P_CT, P_COWP, P_CHWP | power of each piece of equipment in the chiller plants

