Using Neural Networks to Forecast the Steel Amount of Ductile Steel Frames


Bibliographic Details
Main Authors: Yi-Min Huang, 黃一民
Other Authors: Jiin-Po Yeh
Format: Others
Language: zh-TW
Published: 2011
Online Access: http://ndltd.ncl.edu.tw/handle/08782291595030202602
Description
Summary: Master's thesis === I-Shou University === Master's Program, Department of Civil and Ecological Engineering === 99 === The main purpose of this thesis is to use neural networks to forecast the steel amount of ductile steel frames.

First, the popular structural analysis and design software SAP2000 is applied to create 84 models. Using AISC-LRFD99 as the design code, SAP2000 finds the most appropriate H-shaped steel sections through its "Auto Select" function, and the total steel amount of each ductile steel frame is then calculated, giving 84 sets of results in all. These data serve as the inputs (number and height of stories, number of bays, bay width, horizontal earthquake force, and intensity of the vertically distributed load) and target values (steel amount) of the neural networks. To train the networks, the 84 data sets are divided into three groups: training data, validation data, and testing data.

Secondly, the MATLAB neural network toolbox is used to build the networks. Two kinds of networks are applied: the feedforward back-propagation neural network and the radial basis network. The thesis uses a two-layer back-propagation network: the first layer is a hidden layer with several neurons, and the second layer is an output layer with a single neuron. The transfer function of the hidden layer is the tan-sigmoid function, and that of the output layer is the purelin function. The automated regularization function "trainbr" provided by MATLAB is also used to obtain the effective number of parameters, from which the number of hidden-layer neurons can be derived. The Levenberg-Marquardt algorithm is then adopted as the training function; the effective number of neurons obtained from "trainbr", as well as the number of input variables and its multiples, are used as the number of hidden-layer neurons, and the results are compared. To make the network more general and to avoid over-fitting, the validation data are monitored during training in addition to the training data. Numerical results show that the correlation coefficient between the network output and the target values ranges from 0.989 to 0.996, which is very effective.

After that, the radial basis network is employed, using two design functions: newrb and newrbe. The newrb design function is tried first: different mean squared error goals (0.02 to 3) and different SPREAD values (1.4 to 2.2) are tested. With a smaller mean squared error goal and a SPREAD value of about 2.0, the performance is very good and close to that of the back-propagation network; the drawback is that more neurons are required. The newrbe design function is then used. Because its mean squared error goal is fixed at zero, only the most appropriate SPREAD value needs to be found, which turns out to be about 1.4. Owing to over-fitting, the correlation coefficient between this network's output and the target values is only 0.645, making newrbe the poorest of the three networks.
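
The back-propagation setup described in the summary can be sketched in MATLAB roughly as follows. This is a minimal illustration rather than the thesis code: the variable names P and T, the hidden-layer size of 10, and the 70/15/15 data split are assumptions, and the current feedforwardnet interface is used in place of whatever constructor the original work called.

    % Minimal sketch of the two-layer back-propagation network described
    % above. P (6 x 84 inputs) and T (1 x 84 steel amounts) are assumed
    % to be loaded already; the hidden-layer size and the data split are
    % illustrative choices, not the values used in the thesis.
    hiddenNeurons = 10;                             % assumption; the thesis derives this from trainbr
    net = feedforwardnet(hiddenNeurons, 'trainlm'); % Levenberg-Marquardt training
    net.layers{1}.transferFcn = 'tansig';           % tan-sigmoid hidden layer
    net.layers{2}.transferFcn = 'purelin';          % linear output layer

    % Random split into training / validation / test sets; the validation
    % set stops training early to avoid over-fitting.
    net.divideFcn = 'dividerand';
    net.divideParam.trainRatio = 0.70;
    net.divideParam.valRatio   = 0.15;
    net.divideParam.testRatio  = 0.15;

    [net, tr] = train(net, P, T);

    % Bayesian regularization ('trainbr') estimates the effective number
    % of network parameters, which the thesis uses to pick the number of
    % hidden-layer neurons.
    netBR = feedforwardnet(hiddenNeurons, 'trainbr');
    netBR = train(netBR, P, T);

    % Correlation between network output and target values.
    Y = net(P);
    R = corrcoef(Y, T);        % R(1,2) is the correlation coefficient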
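
The radial basis experiments can be sketched in the same spirit; again P and T are assumed input/target matrices, and the goal and SPREAD values are simply representative points from the ranges quoted above.

    % Minimal sketch of the radial basis networks described above, with
    % the same assumed P and T.
    goal   = 0.02;              % mean squared error goal (0.02 to 3 were tested)
    spread = 2.0;               % SPREAD near 2.0 worked best for newrb

    % newrb adds radial basis neurons one at a time until the MSE goal
    % is reached, so a tight goal can require many neurons.
    netRB = newrb(P, T, goal, spread);

    % newrbe designs an exact network (zero training error), so only the
    % SPREAD value has to be chosen; about 1.4 was found appropriate,
    % but with only 84 samples the exact fit over-fits.
    netRBE = newrbe(P, T, 1.4);

    % Compare the outputs of both designs with the targets.
    Yrb  = sim(netRB,  P);
    Yrbe = sim(netRBE, P);
    Rrb  = corrcoef(Yrb,  T);   % close to the back-propagation result
    Rrbe = corrcoef(Yrbe, T);   % much lower because of over-fitting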