
A BHR Composite Network-Based Visualization Method for Deformation Risk Level of Underground Space

  • Wei Zheng ,

    zw3475@163.com

    Affiliation Key Laboratory for Optoelectronic Technology and System of the Education Ministry of China, College of Optoelectronic Engineering, Chongqing University, Chongqing, China

  • Xiaoya Zhang,

    Affiliation Key Laboratory for Optoelectronic Technology and System of the Education Ministry of China, College of Optoelectronic Engineering, Chongqing University, Chongqing, China

  • Qi Lu

    Affiliation Key Laboratory for Optoelectronic Technology and System of the Education Ministry of China, College of Optoelectronic Engineering, Chongqing University, Chongqing, China

Abstract

This study proposes a visualization processing method for the deformation risk level of underground space. The proposed method is based on a BP-Hopfield-RGB (BHR) composite network. Complex environmental factors are integrated in the BP neural network, and dynamic monitoring data are then automatically classified in the Hopfield network. The deformation risk level is combined with the RGB color space model and displayed visually in real time. Experiments are then conducted with an ultrasonic omnidirectional sensor device for structural deformation monitoring, and the proposed method is compared with several typical methods using a benchmark dataset. Results show that the BHR composite network visualizes the deformation monitoring process in real time and can dynamically indicate dangerous zones.

Introduction

Safety accidents frequently occur in large-scale construction of underground space engineering [1, 2]. Safety monitoring of underground space is thus increasingly becoming a critical need for national economic development. However, monitoring data on underground space engineering are large in volume and complex in structure. Moreover, the existing visual tools are often based on specific measurement techniques. These techniques, such as photogrammetric, total station, and 3D laser scanning techniques, mainly focus on measurement data processing and neglect the practicality of displaying risk levels. Thus, monitoring information on underground space safety and evaluating and forecasting its status are difficult, which in turn causes serious obstacles to underground space exploitation.

Some studies have been conducted on the visual technology of safety monitoring. For example, Smith and Brown have made advances in directional borehole radar data analysis and visualization [3]. Tang has shown that the difference evolution arithmetic and visualization toolkit can be used to calculate and evaluate the state of tunnel surrounding rock [4]. A 3D laser scanning system has also been used to acquire and visualize monitoring data on underground space deformation [5, 6]. Chen has studied the browsing modes of the 3D simulation view in monitoring underground construction [7]. An approach has also been proposed for safety management of metro construction using 4D visualization technology [8]. Laser ultrasonic scanning excitation and integrated piezoelectric sensing have been used to visualize debonding damage in composite aircraft [9]. A wireless strain monitoring system, which integrates local tethered data acquisition and long-range wireless data transmission, has been developed for real-time strain monitoring and visualization of building safety [10]. Expanding on the original work in field construction by describing recent advances in both activity- and operation-level construction, Kamat et al. showed that graphical 3D visualization can serve as an effective communication method [11]. Moreover, bridge information modeling has become an effective tool in bridge engineering construction and visualization [12]. Continuous analytic techniques based on fracture mechanics and acoustic-emission analytics, along with software infrastructure, have been applied in real-time monitoring [13]. Glisic et al. researched and proposed the accessibility and visualization principles of heterogeneous monitoring data [14]. Other studies presented a general dynamic visualization model for structural health monitoring (SHM), which results in a dynamic and interactive visualization process [15].
A WSN monitoring framework based on 3D visualization [16] and a wireless data acquisition framework for structural health monitoring and control have also been presented [17].

Although various strategies for visualizing monitoring data have been developed, strategies for underground space safety remain a great challenge for the following reasons: First, the underground space environment is highly complex, and numerous parameters affect safety monitoring [18, 19]; most of the current methods are capable of handling only a limited number of parameters [20]. Second, the existing visualization techniques generally focus on a specific topography [21, 22]; research on a universal visualization model can still be expanded further. Third, the current methods have mainly achieved the visualization of monitoring data with specific physical characteristics, such as strain, temperature, or crack [23–25]. However, only simple methods, such as threshold partition, are adopted for abstract risk-level characteristics. Given these limitations, an intelligent, dynamic, and real-time visualization technique is urgently needed. The present study proposes a visualization technique for underground space deformation risk level based on a BP-Hopfield-RGB (BHR) composite network. Through parallel inputting of multiple environmental parameters, the method constructs a universal model for monitoring underground space safety. Complex environmental factors are integrated by the BP neural network (BPNN), and dynamic monitoring data are automatically classified in the Hopfield network. Combined with the RGB color space, the deformation risk level is displayed in real time. Thus, the dangerous zones in underground space are indicated and located quickly.

Method

This study establishes an ultrasonic spherical sensor device to detect omnidirectional deformation. The center of the sphere is considered the origin, and the ultrasonic transceiver arrays are localized on the surface of the sphere. A spatial 3D coordinate system is established, as shown in Fig 1(a). The ultrasonic spherical sensor array is used as a benchmark of spatial measurement to detect random structural deformation in underground space [26]. The designed device is shown in Fig 1(b).

The 3D space that is monitored through the ultrasonic array is organized and divided into a series of subspaces. These subspaces are distinguished from each other by identifying the ultrasonic sensors localized at the different latitude and longitude lines. The monitoring information for each direction in the 3D space is mapped into a latitude and longitude map to facilitate the visual management of the spatial monitoring information.

This study proposes a BHR composite network for visualizing deformation risk levels. The BHR composite network is composed of BPNN, Hopfield neural network, and RGB color space. The underground space deformation monitoring data are used as input in the BHR network to automatically classify and analyze the visualization of deformation risk levels.

Part 1. BPNN Data Processing

The basic idea of the BPNN is to iteratively learn a certain number of samples, that is, inputs and expected outputs, until the error between the predicted and expected outputs satisfies the set accuracy. In the BPNN, the signal propagates forward and the error propagates backward. The network consists of input, hidden, and output layers. The output y_t of neuron t on the output layer is determined by Eq (1) [27]:

y_t = f( Σ_{j=1..p} v_jt · f( Σ_{i=1..n} w_ij · e_i − θ_j ) − λ_t ),  t = 1, 2, …, q  (1)

where n, p, and q represent the numbers of neurons in the input, hidden, and output layers, respectively; e_i is the i-th component of the input sample; θ_j is the threshold of hidden neuron j; w_ij is the connection weight between the input and hidden layers; v_jt is the connection weight between the hidden and output layers; λ_t is the threshold of output neuron t; and f is the transfer function.

The tan-sigmoid function is used as the BPNN transfer function, given that it limits the output to the range [-1, 1], as shown in Eq (2). Once training yields stable weights and thresholds, the actual measured data are inputted into the trained BPNN [27].

f(x) = (e^x − e^(−x)) / (e^x + e^(−x))  (2)
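The forward pass of Eqs (1) and (2) can be sketched as follows. This is a minimal illustration of the network structure only; the weights and thresholds in the usage below are hypothetical placeholders, not the trained values from the experiments.

```python
import math

def tan_sigmoid(x):
    # Eq (2): tan-sigmoid transfer function, equivalent to math.tanh(x);
    # its output is limited to the range [-1, 1].
    return (math.exp(x) - math.exp(-x)) / (math.exp(x) + math.exp(-x))

def bpnn_forward(e, w, theta, v, lam):
    """Eq (1): forward pass of a single-hidden-layer BPNN.
    e     -- input sample components e_i (length n)
    w     -- w[i][j], input-to-hidden connection weights (n x p)
    theta -- hidden-layer thresholds theta_j (length p)
    v     -- v[j][t], hidden-to-output connection weights (p x q)
    lam   -- output-layer thresholds lambda_t (length q)
    """
    n, p, q = len(e), len(theta), len(lam)
    hidden = [tan_sigmoid(sum(w[i][j] * e[i] for i in range(n)) - theta[j])
              for j in range(p)]
    return [tan_sigmoid(sum(v[j][t] * hidden[j] for j in range(p)) - lam[t])
            for t in range(q)]
```

Because the outer transfer function is the tan-sigmoid, every output component falls in [-1, 1], which is what allows the later Hopfield stage to treat the BPNN output as an external input near the bipolar states.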

Part 2. Hopfield Network Data Processing

A type of feedback network, the Hopfield network has more than one stable state. It begins at a certain initial state and then settles into a stable state, which can be stored in the network by setting the network weights. Two types of Hopfield network exist: continuous and discrete. Letting u_j and z_j represent the input and output of neuron j at time t, respectively, the input and output are determined through Eqs (3) and (4) [28, 29]:

u_j(t) = Σ_{i=1..q} φ_ij · z_i(t) + y_j − δ_j  (3)

z_j(t+1) = f(u_j(t)) = 1 if u_j(t) ≥ 0, and −1 otherwise  (4)

where δ_j is the threshold of neuron j, q is the number of neurons, φ_ij is the connection weight between neurons i and j, and y_j is the external input of neuron j, that is, the actual output of the BPNN.

The Hopfield network energy function is defined as [30]

E = −(1/2) Σ_{i=1..q} Σ_{j=1..q} φ_ij · z_i · z_j − Σ_{j=1..q} y_j · z_j + Σ_{j=1..q} δ_j · z_j  (5)

When the state of neuron j varies from time t to time t + 1, the energy variation of the neuron is as follows:

ΔE_j = −Δz_j · ( Σ_{i=1..q} φ_ij · z_i(t) + y_j − δ_j ) = −Δz_j · u_j(t),  where Δz_j = z_j(t+1) − z_j(t)  (6)

When the state of neuron j changes, its energy variation satisfies ΔE_j ≤ 0. Given that neuron j can be any one of the Hopfield network neurons and that all neurons of the network update according to the same rule, the energy variation of the whole network satisfies ΔE ≤ 0.

The change toward network convergence involves an energy minimization process. Given that the energy function is bounded, the network reaches a steady state. This steady state is a discrete output of the Hopfield network, and the steady-state condition is determined as follows:

z_j(t+1) = z_j(t),  j = 1, 2, …, q  (7)

Whether or not the network reaches a steady state is determined according to Eq (7). If the steady state is reached or the number of iterations satisfies the requirement, then the training ends; otherwise, the iteration continues from the current state.

Given the aforementioned features of the Hopfield network, this study uses the discrete network. The designed stable states are used as the different risk levels of a sensor array. Through the Hopfield network, the data learned by the BPNN reach a stable state, which indicates that the deformation state is maintained at a certain risk level. The output value of the sensor node in the discrete Hopfield network is -1 or 1. The value is -1 when the neuron stays at an inhibitory state and the deformation is at a low risk level, and 1 when the neuron stays at an active state and the deformation is at a high risk level.
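A minimal sketch of the discrete Hopfield dynamics of Eqs (3)-(7) follows. The weights, thresholds, and external inputs used in the test are illustrative, not those produced by the trained BPNN in the experiments.

```python
def hopfield_step(z, phi, delta, y):
    """One asynchronous update sweep of a discrete Hopfield network.
    z     -- current bipolar state, each z_i in {-1, 1}
    phi   -- phi[i][j], symmetric connection weights (phi[j][j] = 0)
    delta -- neuron thresholds delta_j
    y     -- external inputs (here, the BPNN outputs)
    """
    q = len(z)
    z = list(z)
    for j in range(q):
        u = sum(phi[i][j] * z[i] for i in range(q)) + y[j] - delta[j]  # Eq (3)
        z[j] = 1 if u >= 0 else -1                                     # Eq (4)
    return z

def hopfield_energy(z, phi, delta, y):
    # Eq (5): bounded energy function; an update sweep cannot increase it.
    q = len(z)
    pair = -0.5 * sum(phi[i][j] * z[i] * z[j]
                      for i in range(q) for j in range(q))
    return (pair - sum(y[j] * z[j] for j in range(q))
            + sum(delta[j] * z[j] for j in range(q)))

def hopfield_run(z, phi, delta, y, max_iter=100):
    # Iterate until the steady-state condition of Eq (7): z(t+1) == z(t).
    for _ in range(max_iter):
        z_next = hopfield_step(z, phi, delta, y)
        if z_next == z:
            return z
        z = z_next
    return z
```

In the paper's usage, each stored stable state corresponds to one deformation risk level, so running the network amounts to snapping a BPNN output vector onto its nearest risk-level pattern.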

Part 3. RGB Color Space

RGB color space is a 3D Cartesian space based on three components: R, G, and B. The three axes represent the three primary colors, namely, red, green, and blue. Different combinations of R, G, and B values can form approximately 16.78 million distinct colors. The different deformation risk levels can thus be distinguished by the colors formed from different combinations of RGB values.

Part 4. BHR Network Data Processing

Each output vector of the BPNN converges to its own equilibrium state in the Hopfield-RGB network. The stable equilibrium values are then converted into RGB values. For example, if the discrete Hopfield network has three neurons, then the network output consists of three binary numbers; thus, at most eight stable equilibrium states are present. These eight stable equilibrium states correspond to the eight vertices of the RGB color space. Using Eq (8), the output of the Hopfield network can be mapped into the RGB color space.

C_k = 255 · (z_k + 1) / 2,  k = 1, 2, 3 (the R, G, and B channels)  (8)

The equilibrium states in the Hopfield network are converted into certain colors and displayed in real time on the information structure unit of the longitude-latitude mapping model. The Hopfield-RGB network mapping model (with three neurons) is shown in Fig 2.
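Under the vertex correspondence described above, mapping a bipolar Hopfield state to a color amounts to scaling each component into an 8-bit channel. The sketch below is my reading of that correspondence, not code from the paper.

```python
def hopfield_to_rgb(z):
    """Map a bipolar Hopfield state z in {-1, 1}^3 to an RGB vertex.
    Each channel is 255 * (z_k + 1) / 2, so the eight stable states land
    exactly on the eight vertices of the RGB color cube."""
    return tuple(int(255 * (zk + 1) / 2) for zk in z)
```

For example, the state (1, -1, -1), used throughout the experiments as the highest risk level, maps to pure red.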

Part 5. Visualization Model

The deformation risk level is divided into N levels based on the deformation value and environmental parameters. A higher level number corresponds to a higher risk and therefore greater danger. The different risk levels of deformation are represented by different colors to distinguish among them. Based on the above discussion of the BHR composite network, a visualization processing model is proposed for deformation risk levels in underground space, as shown in Fig 3.

Fig 3. BHR network-based visualization processing model for deformation risk levels.

https://doi.org/10.1371/journal.pone.0127088.g003

First, the real-time data monitored by the sensor array and environmental impact parameters are processed as a normalized parameter matrix, after which the matrix is inputted to the BPNN. Through the self-learning process of the BPNN, the stable weights and thresholds are acquired. Second, the Hopfield network is used to automatically classify the data learned by the BPNN. Finally, the deformation risk level classification is mapped into the color space and displayed in real time with different colors.
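The three stages just described can be chained end to end as follows. All weights and thresholds here are hypothetical placeholders standing in for the trained values; the point is only the shape of the pipeline: normalized parameters → BPNN → discrete Hopfield classification → RGB display value.

```python
import math

def bpnn(sample, w, theta, v, lam):
    # Stage 1: BPNN integrates the normalized environmental parameters.
    # w[j] holds the input weights of hidden neuron j; v[t] the hidden
    # weights of output neuron t. tanh is the tan-sigmoid of Eq (2).
    hidden = [math.tanh(sum(wi * s for wi, s in zip(row, sample)) - t)
              for row, t in zip(w, theta)]
    return [math.tanh(sum(vj * h for vj, h in zip(row, hidden)) - t)
            for row, t in zip(v, lam)]

def hopfield(y, phi, delta, iters=50):
    # Stage 2: the discrete Hopfield network snaps the BPNN output onto
    # a stable bipolar state representing a risk level.
    z = [1 if yi >= 0 else -1 for yi in y]
    for _ in range(iters):
        z_new = [1 if sum(phi[i][j] * z[i] for i in range(len(z)))
                 + y[j] - delta[j] >= 0 else -1 for j in range(len(z))]
        if z_new == z:
            break
        z = z_new
    return z

def to_rgb(z):
    # Stage 3: map the stable state to an RGB vertex for display.
    return tuple(int(255 * (zi + 1) / 2) for zi in z)
```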

Results and Discussion

Model Experiment

The sensor nodes for the underground space deformation monitoring system are in different monitoring environments; thus, they have different measurement data and environmental parameters. This study considers three factors: object material, distance, and ranging difference.

  1. Object material: Given that the monitoring object of the ultrasonic omnidirectional array is underground space, the properties of the monitoring object may differ for the monitoring area of each sensor node, which affects the deformation risk level. The impact factor is set as a decimal value: the maximum impact factor across the different materials is 1, and the minimum is 0.
  2. Distance: The distance between the monitored area and the sensor node affects the monitoring range and ultimately the deformation risk level. A greater distance corresponds to a higher deformation risk level. The maximum value of the parameter is set at 10,000 mm and the minimum value at 50 mm.
  3. Ranging difference: The ranging difference directly reflects the deformation. A larger difference means a greater deformation. The corresponding monitoring region has a high deformation risk level. The maximum value of the parameter is set at 50 mm and the minimum value at 3 mm, that is, the value for the quantitative accuracy of the sensor.
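The paper does not spell out the exact normalization applied to these three factors, so the sketch below assumes a standard min-max scaling to [-1, 1] over the ranges stated above (this matches the [-1, 1] range the BPNN works in, but the scaling formula is my assumption).

```python
# Assumed min-max scaling to [-1, 1]; the parameter ranges come from the
# three factors described above, the formula itself is an assumption.
RANGES = {
    "material": (0.0, 1.0),       # impact factor of the object material
    "distance": (50.0, 10000.0),  # mm, sensor node to monitored area
    "rang_diff": (3.0, 50.0),     # mm, ranging difference (deformation)
}

def normalize(name, value):
    lo, hi = RANGES[name]
    return 2.0 * (value - lo) / (hi - lo) - 1.0
```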

Visualization Experiment of a Single Sensor

The method can be used for deformation monitoring of an underground structure such as an artificial tunnel or a natural cave. However, applying real deformation to a monitored structure that is in a stable state is impossible. Therefore, the proposed method is illustrated via laboratory experiments. Considering the three factors above, an experiment was conducted, as shown in Fig 4. Elastic pads were used to build a semi-closed structure, and the structural deformation was simulated by exerting force on the pads.

Fig 4. Experiment setup for acquiring the deformation risk level.

https://doi.org/10.1371/journal.pone.0127088.g004

First, the BPNN was trained using the data monitored by the sensor array and the environmental impact parameters (the expected outputs were set as six risk levels). Then the structural deformation within the monitoring area of a single sensor node was measured. The pressure on the elastic deformable body was gradually increased, the distances were acquired using the sensor, and the deformation ranging differences were then derived. Considering the parameter of the object material, a monitoring dataset was acquired, as shown in Table 1.

Table 1. Deformation monitoring data of the sensor node located at longitude (E90) and latitude (0).

https://doi.org/10.1371/journal.pone.0127088.t001

A total of 20 sets of data at different time points formed a parameter matrix, which was then normalized. The normalized data were used as input in the trained BPNN, and the data converged to the range of [-1, 1]. The output results of the BPNN are shown in Table 2.

Second, the output matrix of the BPNN was used as input in the Hopfield-RGB network. Different stable equilibrium points were presented in the network, and the data of the matrix were automatically classified into different risk levels. The experimental results for different numbers of risk levels are shown in Fig 5. Fig 5(a) displays the result graph with two deformation risk levels; Fig 5(b), with three deformation risk levels; Fig 5(c), with four deformation risk levels; and Fig 5(d), with six deformation risk levels. The data processed by the BPNN converged to the stable equilibrium points along the solid colored lines.

Fig 5. Experimental result graphs of different types of risk levels.

https://doi.org/10.1371/journal.pone.0127088.g005

In Fig 5(a), the deformation risk levels were set to two. The stable equilibrium points of risk levels 2 to 1 were set to (1, -1, -1) and (-1, 1, -1), which correspond to red (the highest level) and green (the lowest level) in the color space, respectively. In Fig 5(b), the deformation risk levels were set to three. The stable equilibrium points of risk levels 3 to 1 were set to (1, -1, -1), (1, 1, 1), and (-1, 1, -1), which correspond to red (the highest level), white (replaced with purple in Fig 5 because white tracks cannot be seen clearly), and green (the lowest level) in the color space, respectively. In Fig 5(c), the deformation risk levels were set to four. The stable equilibrium points of risk levels 4 to 1 were set to (1, -1, -1), (-1, -1, -1), (1, 1, 1), and (-1, 1, 1), which correspond to red (the highest level), black, white (replaced with purple), and cyan (the lowest level) in the color space, respectively. In Fig 5(d), the deformation risk levels were set to six. The stable equilibrium points of risk levels 6 to 1 were set to (1, -1, -1), (1, -1, 1), (-1, -1, -1), (1, 1, 1), (-1, 1, -1), and (-1, 1, 1), which correspond to red (the highest level), carmine, black, white (replaced with purple), green, and cyan (the lowest level) in the color space, respectively.

Data Visualization Experiment of Omnidirectional Sensor Array

Different pressures were exerted on different areas of the elastic pads to visualize the sensor array monitoring data. The deformation data monitored by the sensor array were used as input in the trained visualization model of the deformation risk level. The visualization results are shown in Fig 6; the risk levels are divided into two, three, four, and six levels in Fig 6(a), 6(b), 6(c), and 6(d), respectively. The horizontal coordinates represent the latitude and longitude of the earth's longitude-latitude mapping model, which were used to locate the sensor nodes. The dangerous zones were then located, with different colors representing different risk levels. The sensor array monitoring data were visualized, and the risk level values were displayed synchronously.

Fig 6. Visualization results of different types of deformation risk levels.

https://doi.org/10.1371/journal.pone.0127088.g006

Comparison with Other Algorithms

Part 1. Dataset introduction.

A standard dataset was used to compare the chosen algorithms. Different algorithms were used to process the dataset to select the suitable classification algorithm.

Given that this study focuses on underground space monitoring, a benchmark dataset for seismic bumps was selected [18]. Mining activities are always related to the occurrence of various forms of danger, which are commonly called mining hazards. The dataset describes the problem of forecasting high-energy (higher than 10^4 J) seismic bumps in a coal mine, and the data were obtained from two longwalls in a Polish coal mine. The dataset is a matrix with 2584 instances and 19 attributes. The present study selected three attributes, namely, genergy, gdenergy, and gdpuls, comprising a total of 1760 instances, of which 1600 were used for network training and 160 for network testing. The test results were then compared. The seismoacoustic attribute was also selected to calculate the accuracy rate of the classifier. The four selected attributes are described as follows:

  1. Genergy: the seismic energy recorded in the previous shift by the most active geophone (GMax) out of all the geophones that monitor the longwall.
  2. Gdenergy: a deviation of energy recorded within the previous shift by GMax from the average energy recorded in the eight previous shifts.
  3. Gdpuls: a deviation of a number of pulses recorded in the previous shift by GMax from the average number of pulses recorded in the eight previous shifts.
  4. Seismoacoustic: the result of the shift seismic hazard assessment in the mine obtained through the seismoacoustic method.
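The attribute selection and 1600/160 split described above can be sketched as follows. The file name and CSV layout are hypothetical (the original seismic-bumps data is distributed in ARFF format; this assumes an export with a header row containing the attribute names).

```python
import csv

def load_split(path="seismic-bumps.csv", n_train=1600, n_test=160):
    """Select genergy, gdenergy, and gdpuls as features and the
    seismoacoustic attribute as the label used to score the classifier,
    then split the rows into training and testing portions."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    X = [[float(r["genergy"]), float(r["gdenergy"]), float(r["gdpuls"])]
         for r in rows]
    y = [r["seismoacoustic"] for r in rows]
    return (X[:n_train], y[:n_train]), \
           (X[n_train:n_train + n_test], y[n_train:n_train + n_test])
```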

Part 2. Comparison of the pre-processing algorithms.

This study normalized the environmental impact parameters and the data monitored by the ultrasonic omnidirectional sensors, which then formed a parameter matrix. Before classification, the parameter matrix was pre-processed with an algorithm that converges all parameters into the range of [-1, 1]. Accordingly, three classic algorithms—the BPNN, radial basis function neural network (RBFNN), and generalized regression neural network (GRNN)—were selected to pre-process the seismic bump dataset.

The BPNN, also known as the error backpropagation neural network, is a typical multilayer feedforward network in which the signal propagates forward and the error propagates backward. Unlike the globally acting transfer function of the BPNN, the activation function of the RBFNN is localized, and the RBFNN can approximate any nonlinear function. The GRNN is a one-pass learning algorithm that approximates an arbitrary function between the input and output vectors, drawing the function estimate directly from the training data [31].

The experiment was conducted under the same conditions. The experimental hardware platform was Intel(R) Pentium(R) CPU 2.6 GHz (two CPUs) 4 GB RAM, and the software experimental platform was Microsoft Windows XP, C language. The three algorithms above were used to process the standard dataset. The compared items were execution time, running memory space, and mean squared error; the results are shown in Fig 7.

Fig 7. Comparison of the algorithms for pretreating the data.

https://doi.org/10.1371/journal.pone.0127088.g007

The comparison of the BPNN and RBFNN in Fig 7 shows little difference in mean squared error and running memory space. However, the BPNN uses less time than the RBFNN, and it also has a smaller mean squared error than the RBFNN and GRNN. Considering all three criteria together, the BPNN is the most suitable algorithm for preprocessing the data in this study.

Part 3. Classification algorithm comparison.

Three classic algorithms—the Hopfield neural network, k-nearest neighbor (KNN), and support vector machine (SVM)—were selected to classify the 160 sets of preprocessed data. The algorithms were evaluated in terms of execution time, running memory space, and classification accuracy. The classification results are shown in Fig 8.

The data are classified into two types, as shown in Fig 8. The green part represents seismoacoustic = 1, which indicates "lack of hazard," and the red part represents seismoacoustic = 2, which indicates "hazard." The result of the Hopfield neural network is more concise and shows better classification performance than the other algorithms. A detailed comparison of the results of the three algorithms is shown in Fig 9.

The comparison of the Hopfield neural network and the SVM in Fig 9 shows that the two algorithms differ little in running memory space and classification accuracy. However, the Hopfield network uses less time than the SVM. Compared with the KNN, the Hopfield network achieves higher classification accuracy and uses less time. Considering all three criteria together, the Hopfield neural network is the most suitable classification algorithm for this study.

Comparison with Existing Techniques

The visualization technologies for safety monitoring are closely connected to measurement systems. Thus, the proposed method is compared with data processing systems based on three techniques, namely, photogrammetric, total station, and 3D laser scanning techniques, which have been widely used for deformation monitoring over the past decade. Specifically, the photogrammetric technique places targets on the gallery vault, and useful information is then obtained from photographs captured by an optical camera [19]. In the total station technique, an electronic theodolite is integrated with an electronic distance meter to read slope distances from the instrument to a particular point. Distance is measured by a modulated infrared carrier signal that is generated by a small, solid-state emitter within the optical path of the instrument and is reflected by either a prism reflector or the object under survey [20]. In the 3D laser scanning technique, a 3D laser scanner is employed to scan the surface of the target object to obtain point clouds of thousands or millions of coordinates with millimeter accuracy, and the 3D profile can be constructed via data merging [6]. Performance parameters considered for comparison include accuracy, distance, display speed, anti-dusting capability, the need for manual assistance, capability to handle environmental parameters, adaptability in topography monitoring, and visualization results (Table 3).

Table 3. Comparison of visualization techniques on the basis of typical measurement systems.

https://doi.org/10.1371/journal.pone.0127088.t003

The comparison indicates that although the accuracy and distance of the proposed method are not ideal, this method has significant potential for use in the monitoring of structural safety given its quick display speed, anti-dusting capability, the lack of a need for manual assistance, capability to consider environmental parameters, adaptability in topography monitoring, and capability to display risk levels.

Conclusion

This study investigated the visualization model of the deformation risk levels of the ultrasonic omnidirectional array. Multiple environmental parameters were considered. The BPNN and Hopfield neural network were adopted for pre-processing and classifying the sensor data. The data processing results were mapped into RGB color space, which visualized the deformation risk levels of the sensor array. The visualization results facilitate the determination of the presence and location of danger. Experiments and comparison with other algorithms demonstrate that the method is characterized by intelligent, dynamic, and real-time features and can therefore be used as a universal model for safety monitoring in underground space.

Acknowledgments

The authors would like to thank Mr. Jingyu Jiang for establishing the experimental system and Mr. Chunxian Wu for discussions.

Author Contributions

Conceived and designed the experiments: WZ. Performed the experiments: QL. Analyzed the data: WZ XYZ QL. Contributed reagents/materials/analysis tools: WZ. Wrote the paper: WZ XYZ.

References

  1. Yeung JS, Wong YD. Road traffic accidents in Singapore expressway tunnels. Tunn Undergr Sp Tech. 2013;38:534–41. WOS:000328234300052.
  2. Silvestrini M, Genova B, Trujillo FJL. Energy concentration factor. A simple concept for the prediction of blast propagation in partially confined geometries. J Loss Prevent Proc. 2009;22(4):449–54. WOS:000274354000013.
  3. Smith DV, Brown PJ. Advances in directional borehole radar data analysis and visualization. In: Koppenjan SK, Lee H, editors. GPR 2002: Ninth International Conference on Ground Penetrating Radar. Proceedings of the Society of Photo-Optical Instrumentation Engineers (SPIE). 4758. Bellingham: SPIE-Int Soc Optical Engineering; 2002. p. 251–5.
  4. Tang SL, Jiang AA, Zhao H. Study and Apply the Visualization Intelligent Feedback Analysis System of Tunnel Construction. In: Zhang H, Shen G, Jin D, editors. Advanced Research on Automation, Communication, Architectonics and Materials, Pts 1 and 2. Advanced Materials Research. 225–226. Stafa-Zurich: Trans Tech Publications Ltd; 2011. p. 26–30.
  5. Wu SL, Deng HL, Chen KJ, Zhu MY, Huang DH, Fu SY. Visual monitoring technology of the tunnel 3D laser scanning and engineering applications. Adv Mater Res-Switz. 2013;779–780:463–8. WOS:000336106300093.
  6. Kai C, Da Z, Sheng ZY. Development of a 3D laser scanning system for the cavity. International Conference on Optics in Precision Engineering and Nanotechnology (ICOPEN2013). 2013;8769:7. WOS:000323566900051. pmid:24663876
  7. Chen LH, Liao FQ, Ye M. The Development and Application of 3D Monitoring Alarm Warning System in Tunnel Construction. In: Huang Y, editor. Advances in Civil and Structural Engineering III, Pts 1–4. Applied Mechanics and Materials. 501–504. Stafa-Zurich: Trans Tech Publications Ltd; 2014. p. 839–42.
  8. Zhou Y, Ding LY, Chen LJ. Application of 4D visualization technology for safety management in metro construction. Autom Constr. 2013;34:25–36. WOS:000321092700004.
  9. Chia CC, Jeong HM, Lee JR, Park G. Composite aircraft debonding visualization by laser ultrasonic scanning excitation and integrated piezoelectric sensing. Struct Control Hlth. 2012;19(7):605–20. WOS:000311396800006.
  10. Ye XW, Ni YQ, Xia YX. Distributed Strain Sensor Networks for In-Construction Monitoring and Safety Evaluation of a High-Rise Building. Int J Distrib Sens N. 2012. Artn 685054. WOS:000309540300001.
  11. Kamat VR, Martinez JC, Fischer M, Golparvar-Fard M, Pena-Mora F, Savarese S. Research in Visualization Techniques for Field Construction. J Constr Eng M ASCE. 2011;137(10):853–62. WOS:000296507700021.
  12. Marzouk MM, Hisham M, Al-Gahtani K. Applications of bridge information modeling in bridges life cycle. Smart Struct Syst. 2014;13(3):407–18. WOS:000336432800005.
  13. Giangarra PP, Metrovich B, Schwitters MM, Semple BP. Smarter bridges through advanced structural health monitoring. IBM J Res Dev. 2011;55(1–2). Artn 9. WOS:000301498300003.
  14. Glisic B, Yarnold MT, Moon FL, Aktan AE. Advanced Visualization and Accessibility to Heterogeneous Monitoring Data. Comput-Aided Civ Inf. 2014;29(5):382–98. WOS:000334157400005.
  15. Sun P, Wu ZY, Hua QY, Li ZH, Kang MN. DynaView: General Dynamic Visualization Model for SHM. Math Probl Eng. 2012. Artn 542501. WOS:000308495300001.
  16. Koo B, Shon T. Implementation of a WSN-Based Structural Health Monitoring Architecture Using 3D and AR Mode. IEICE T Commun. 2010;E93b(11):2963–6. WOS:000284448600015.
  17. Linderman LE, Mechitov KA, Spencer BF. TinyOS-based real-time wireless data acquisition framework for structural health monitoring and control. Struct Control Hlth. 2013;20(6):1007–20. WOS:000317421700011.
  18. Sikora M, Wrobel L. Application of rule induction algorithms for analysis of data collected by seismic hazard monitoring systems in coal mines. Arch Min Sci. 2010;55(1):91–114.
  19. Scaioni M, Barazzetti L, Giussani A, Previtali M, Roncoroni F, Alba MI. Photogrammetric techniques for monitoring tunnel deformation. Earth Sci Inform. 2014;7(2):83–95. WOS:000337040900003.
  20. Lavine A, Gardner JN, Reneau SL. Total station geologic mapping: an innovative approach to analyzing surface-faulting hazards. Eng Geol. 2003;70(1–2):71–91. WOS:000185641200006.
  21. Li XJ, Zhu HH. Modeling and Visualization of Underground Structures. J Comput Civil Eng. 2009;23(6):348–54. WOS:000270913900006.
  22. Kim S, Maciejewski R, Malik A, Jang Y, Ebert DS, Isenberg T. Bristle Maps: A Multivariate Abstraction Technique for Geovisualization. IEEE T Vis Comput Gr. 2013;19(9):1438–54. WOS:000322027300002. pmid:23846090
  23. Sousa H, Cavadas F, Henriques A, Bento J, Figueiras J. Bridge deflection evaluation using strain and rotation measurements. Smart Struct Syst. 2013;11(4):365–86. WOS:000324873900003.
  24. Simbeye DS, Zhao JM, Yang SF. Design and deployment of wireless sensor networks for aquaculture monitoring and control based on virtual instruments. Comput Electron Agr. 2014;102:31–42. WOS:000334010400004.
  25. Liang GL. Comprehensive Stability Analysis of High Rock Slope based on Safety Monitoring. Disaster Adv. 2012;5(4):312–20. WOS:000313100100050.
  26. Zheng W, Li Y, Qian X, Lu P. An ultrasonic omni-directional sensor device for coverage structural deformation monitoring. Meas Sci Technol. 2014;25(3). WOS:000332698400037.
  27. Benjamin CO, Chi SC, Gaber T, Riordan CA. Comparing BP and ART-II Neural-Network Classifiers for Facility Location. Comput Ind Eng. 1995;28(1):43–50. WOS:A1995QE25900003.
  28. Szedlak A, Paternostro G, Piermarocchi C. Control of Asymmetric Hopfield Networks and Application to Cancer Attractors. PLoS One. 2014;9(8). Artn e105842. WOS:000341127500071.
  29. Akhmet M, Fen MO. Generation of cyclic/toroidal chaos by Hopfield neural networks. Neurocomputing. 2014;145:230–9. WOS:000342248100027.
  30. Albertini MK, de Mello RF. Energy-based function to evaluate data stream clustering. Adv Data Anal Classi. 2013;7(4):435–64. WOS:000327869600005.
  31. Heddam S. Generalized regression neural network (GRNN)-based approach for colored dissolved organic matter (CDOM) retrieval: case study of Connecticut River at Middle Haddam Station, USA. Environ Monit Assess. 2014;186(11):7837–48. pmid:25112840.