SEVENTH FRAMEWORK PROGRAMME
Capacities Specific Programme Research Infrastructures
Project No.: 227887
SERIES SEISMIC ENGINEERING RESEARCH INFRASTRUCTURES FOR EUROPEAN SYNERGIES
Deliverable D13.1 covering Tasks JRA 2.1 and JRA 2.2
Work package [WP13/JRA2] Deliverable [D13.1] - [Report on advanced sensors, vision systems and control techniques for measuring structural/foundation response, improving test control and hybrid testing. Dissemination of sensor and vision systems to partner infrastructures not directly involved in their development or application]
Deliverable/Editor: [CEA, UNITN]
Reviewer: [UNITN]
Revision: Final
May, 2011
2.6 TT2 test: nonlinear hydraulic actuator model ............................................................51
2.6.1 Actuator model ................................................................................................52
2.6.1.1 The Merritt servohydraulic model .... 52 2.6.1.1.1 Fluid mechanics equations .... 52 2.6.1.1.2 Servohydraulic system .... 54 2.6.1.1.3 Final Merritt model equations .... 56
2.6.1.2 The modified actuator model .... 57 2.6.1.2.1 Flow decomposition for actuator modeling .... 58 2.6.1.2.2 Force equation on piston .... 61 2.6.1.2.3 Servovalve equations .... 63 2.6.1.2.4 Governing equations of the modified actuator model .... 64
2.6.2.1 Identification tests .............................................................................. 68 2.6.2.1.1 Identification test n° 1: no velocity test ...................................... 68 2.6.2.1.2 Identification test n° 2: no load flow test .................................... 71 2.6.2.1.3 Identification test n° 3: Sine sweep test ..................................... 74
2.6.2.2 Results: model vs test ......................................................................... 76 2.6.2.2.1 Step test ....................................................................................... 77 2.6.2.2.2 Sine sweep test ............................................................................ 77 2.6.2.2.3 White noise test ........................................................................... 79 2.6.2.2.4 Conclusion ................................................................................... 79
3.5.4.1 Targets networks and artificial texture on the bridge ..................... 143 3.5.4.2 Tracking method and image matching ............................................ 145
3.5.5 PsD methodology: an example of stereo-vision measurements on the Future
3.5.5.1 Description of the experiment ......................................................... 148 3.5.5.2 Strong floor displacements .............................................................. 150 3.5.5.3 General drift of the beam ................................................................. 153 3.5.5.4 Opening and sliding between slab and sandwich ............................ 154 3.5.5.5 Shell buckling.................................................................................... 159
3.5.6 On some real time displacement measurements .........................................161
3.5.7 Shake table methodology: recent research efforts in using photogrammetry .... 163
3.5.8 Commercial Integrated Systems ...................................................................164
3.5.9 Hardware Components for photogrammetry on shake table experiments 165
3.5.10 Photogrammetric System Configuration .....................................................169
3.5.11 Software development ..................................................................................171
3.5.11.1 Stereoscopic video capture .............................................................. 171 3.5.11.2 Stereoscopic video play-back .......................................................... 173 3.5.11.3 Camera calibration ........................................................................... 174 3.5.11.4 Target tracking and Triangulation ................................................... 177
3.5.12 Shake table methodology: an example of photogrammetry on the
3.5.12.1 Presentation/Context ....................................................................... 180 3.5.12.2 Equipment ........................................................................................ 180 3.5.12.3 Stereovision system evaluation: test on a rocking and sliding block 183 3.5.12.4 Using the stereovision system during shaking table tests: drums stacked on AZALEE table ................................................................................... 189
Fig. 2. 1 Schematic representation of a 2-DoF structure with substructuring: (a) emulated structure; (b) partitioned structure; (c) numerical substructure; and (d) physical substructure and transfer system .... 19 Fig. 2. 2 Spectral radii ρ of real-time compatible integration methods vs non-dimensional frequency Ω .... 21 Fig. 2. 3 The GC method .... 25 Fig. 2. 4 The interfield parallel solution procedure of the PM method .... 26 Fig. 2. 5 The multi-time-step partitioned algorithm with ss=2: (a) staggered procedure; (b) interfield parallel procedure .... 27 Fig. 2. 6 The solution procedure of the improved interfield parallel algorithm .... 28 Fig. 2. 7 Comparison of test results between different partitioned methods .... 29 Fig. 2. 8 Schematic for real-time substructuring tests .... 32 Fig. 2. 9 Block diagram for real-time substructuring tests .... 32 Fig. 2. 10 Substructuring test with a model of the physical specimen to improve the test characteristics (After Sivaselvan, 2006) .... 34 Fig. 2. 11 Adaptive model of the physical specimen for palliating lack of knowledge (After Sivaselvan, 2006) .... 36 Fig. 2. 12 Block diagram of IMC .... 39 Fig. 2. 13 Two-degree of freedom IMC .... 40 Fig. 2. 14 Model reference adaptive inverse control system .... 40 Fig. 2. 15 IMC applications in the actuators of the TT1 test rig .... 41 Fig. 2. 16 Basic structure of MPC .... 43 Fig. 2. 17 MPC strategy .... 44 Fig. 2. 18 Schematic of real-time test .... 45 Fig. 2. 19 The scheme of open-loop inverse-dynamics control .... 47 Fig. 2. 20 The scheme of parallel model-reference adaptive control .... 49 Fig. 2. 21 The control block diagram for inverse dynamics + adaptive controllers .... 50 Fig. 2. 22 Flows entering and leaving a control volume (Merritt, 1967) .... 53 Fig. 2. 23 Valve piston combination (Merritt, 1967) .... 54 Fig. 2. 24 Flows in actuator .... 58 Fig. 2. 25 Stiffness and pulsation normalized variations .... 60 Fig. 2. 26 Forces acting on the piston .... 61 Fig. 2.
27 Stribeck model curve for friction forces variation depending on velocity (Jellali and Kroll, 2003) ........................................................................................................................... 62 Fig. 2. 28 Experimental setup..................................................................................................... 66 Fig. 2. 29 Drawing of the experimental servo-hydraulic setup ............................................... 66 Fig. 2. 30 Sensors of the actuator ............................................................................................... 67 Fig. 2. 31 Variations of displacement, drive and pressure depending on time are not significant ..................................................................................................................................... 68 Fig. 2. 32 Force (from load cell) depending on differential pressure ..................................... 69 Fig. 2. 33 No velocity flow depending on pressure ................................................................... 70 Fig. 2. 34 No velocity flow experimental and fitted curves...................................................... 70
Fig. 2. 35 Charge loss coefficient Kce .... 71 Fig. 2. 36 Flow depending on piston velocity .... 72 Fig. 2. 37 Friction force depending on velocity .... 73 Fig. 2. 38 No load flow depending on drive .... 73 Fig. 2. 39 Non-linearities appearing on first servovalve (left) with an overlap and the second servovalve (right) with an underlap .... 74 Fig. 2. 40 Actuator with a rigid mass .... 75 Fig. 2. 41 Oil stiffness evaluation, sine sweep test .... 75 Fig. 2. 42 Drive signal of the reference test .... 76 Fig. 2. 43 Model vs test velocity, reference test .... 76 Fig. 2. 44 Model vs test velocity, step test .... 77 Fig. 2. 45 Model vs test velocity, step test, zoom .... 77 Fig. 2. 46 Model vs test velocity, 0.1 Hz and 10 Hz sine test .... 77 Fig. 2. 47 Model vs test velocity, 18 Hz and 40 Hz sine test .... 78 Fig. 2. 48 Model vs test velocity, 60 Hz and 80 Hz sine test .... 78 Fig. 2. 49 Model vs test velocity, 100 Hz and 120 Hz sine test .... 78 Fig. 2.
50 Model vs test velocity, white noise test ..................................................................... 79 Fig. 2. 51 Model vs test acceleration, 15 Hz sinus test, zoom .................................................. 79 Fig. 2. 52 Schematic of seating deck frame with actuators and air springs........................... 84 Fig. 2. 53 Control loops for motion of seating deck ................................................................. 87 Fig. 2. 54 Encoder displacement of an actuator on the grandstand (upper), detail showing data points and glitch (lower) .................................................................................................... 88 Fig. 2. 55 Load cell output for actuator load cell – whole trace (upper), detail (lower) ....... 89 Fig. 2. 56 Power spectral density of load cell signal ................................................................. 90 Fig. 2. 57 Load cell signal from spectator cell – detail trace (upper) and psd (lower).......... 90 Fig. 3. 1 cyclic test N. 4: specimen cross-section (dimensions in mm) .................................... 96 Fig. 3. 2 Four load points scheme (dimensions in mm)............................................................ 96 Fig. 3. 3 Cyclic test N.4: top side internal vs external fiber data ............................................ 97 Fig. 3. 4 Cyclic test N.4: bottom side internal vs external fiber data ..................................... 97 Fig. 3. 5 Cyclic test N.4: moment-rotation curve ..................................................................... 98 Fig. 3. 6 Cyclic test N.4: Comparison between AEPs, strain gauges and fiber optic sensors........................................................................................................................................................ 98 Fig. 3. 7 Full scale test set-up of the tunnel ring (dimensions in cm)...................................... 99 Fig. 3. 
8 Full scale test set-up of the tunnel ring (dimensions in cm) .... 100 Fig. 3. 9 Comparison between actuator inner displacement and wire 2-6 .... 100 Fig. 3. 10 External unbonded AOS fiber data in Section 1 for the pre-straining phase .... 101 Fig. 3. 11 Inner bonded AOS fiber data in Section 2 during the ECCS phase .... 101 Fig. 3. 12 External unbonded AOS fiber data in Section 8 for the ECCS phase .... 102 Fig. 3. 13 Functional elements of a wireless sensor for structural monitoring applications .... 109 Fig. 3. 14 Wireless network typologies for wireless sensor networks .... 110 Fig. 3. 15 A Base Station and a MOTE Unit .... 118 Fig. 3. 16 Testing Scheme .... 118 Fig. 3. 17 Laboratory Test Layout .... 119 Fig. 3.18 Sensors arrangements: (a) Tests on X axis, (b) Tests on Y axis, (c) Tests on Z axis .... 120 Fig. 3.19 (a) Fitted time histories of the sample test using test's parameters; .... 121
Fig. 3.20 Steel/aluminium frame placed on the shaking table instrumented with accelerometers ........................................................................................................................... 122 Fig. 3.21 Earthquake simulation fitted and synchronized time histories ............................ 123 Fig. 3.22 The wireless nodes to be tested (the nodes are packaged in plastic boxes of dimension 11x8x4cm, a 19cm high antenna; the weight of a sensor is 150g). ..................... 124 Fig. 3.23 Testing scheme with the wired and wireless strain gauges. ................................... 124 Fig. 3.24 (a) Strain gauges mounted in the bare bars; (b) strain gauges mounted in the bar in the concrete. .......................................................................................................................... 124 Fig. 3.25 Strain measured by wired and wireless strain gauges ........................................... 125 Fig. 3. 26 The Swissranger® SR4000 range camera .............................................................. 139 Fig. 3. 27 Calibration of the stereo rig .................................................................................... 141 Fig. 3. 28 Optical distortion of the right camera .................................................................... 142 Fig. 3. 29 a) close-up view of the random texture of the bridge, b) corresponding window on left camera c) corresponding window on right camera ......................................................... 143 Fig. 3. 30 Synopsis of the tracking method ............................................................................. 144 Fig. 3. 31 Illustration of the matching method ....................................................................... 145 Fig. 3. 32 Perspective view of the bridge ................................................................................. 148 Fig. 3. 
33 Right view of the beam, with some measurement points and LVDT available for comparison .... 151 Fig. 3. 34 Evidence of the floor displacement .... 152 Fig. 3. 35 Evolution of the slope of the floor at points 13 and 17 .... 153 Fig. 3. 36 Drifting of the bridge longitudinal to its axis (a) and perpendicular to it (b) for points 1, 4, 9 and 11 .... 154 Fig. 3. 37 Right view of the concrete slab with targets indicated by red crosses; cyan crosses correspond to sandwich and green ones to FRP .... 154 Fig. 3. 38 Left and right views of the LVDT 22; the profile of the lever is delineated on the left view .... 155 Fig. 3. 39 Signal of the LVDT 22, compared to the distance between targets 77 and 569, on its extremities; the green curve corresponds to sliding as measured from target 417 .... 155 Fig. 3. 40 (a) Sliding profile of the concrete slab with respect to the sandwich panel, at the successive loading maxima; (b) corresponding opening .... 157 Fig. 3. 41 Close-up view of the green rectangle in Fig. 3. 9 (right view), for (b) initial time, to be compared with (a) and (c); for (c) the concrete slab has been registered to its initial state, so that relative displacements of targets on the sandwich panel and FRP are evidenced .... 158 Fig. 3. 42 Perspective views of the surface of reference (black) and of its displacement at time step 2069 (red).
A bulge and a declivity can be seen on the red surface, with respect to the reference one ....................................................................................................................... 160 Fig. 3. 43 The difference between out of plane displacement for time steps 2071 and 2069 reveals the shell buckling ......................................................................................................... 161 Fig. 3. 44 a) Experimental set-up, the actuator loading the damper is clearly visible on the right side of the photo. The camera on the left partially hide the damper in the back-ground, that is vertically loaded by a square plate and 4 Dividags. b) A detail of the piston on which the tracked target is stuck ........................................................................................ 162 Fig. 3. 45 a) comparison of optical results (green) with Heidenhain (red) and Temposonics (blue); b) difference between Heidenhain and optical methods ........................................... 162 Fig. 3. 46 a) longitudinal and lateral displacements; b) cycles.............................................. 163 Fig. 3. 47 Rolling Shutter and global shutter video capture ................................................. 167 Fig. 3. 48 Configuration of vision system developed at LEE/NTUA .................................... 171
Fig. 3. 49 Software for stereoscopic video capture developed at LEE/NTUA ..................... 173 Fig. 3. 50 Stereoscopic video play-back of the system developed at LEE/NTUA................ 173 Fig. 3. 51 Indicative camera positions for camera calibration .............................................. 174 Fig. 3. 52 Camera calibration software developed at LEE/NTUA ....................................... 175 Fig. 3. 53 Template and actual (captured) target ................................................................... 177 Fig. 3. 54 Targets on specimen at LEE/NTUA ....................................................................... 179 Fig. 3. 55 Trajectory along X axes (displacement in meters) for the experiment performed at LEE/NTUA ............................................................................................................................ 179 Fig. 3. 56 Carbon arm drawing................................................................................................ 181 Fig. 3. 57 Different pictures of the carbon arm ...................................................................... 181 Fig. 3. 58 VIDEOMETRIC target ........................................................................................... 182 Fig. 3. 59 Left and right images of stereovision system ......................................................... 183 Fig. 3. 60 Test rig for stereovision system evaluation ............................................................ 184 Fig. 3. 61 A theoretical Gaussian distribution with µ, mean value and σ, standard deviation ..................................................................................................................................... 185 Fig. 3. 62 Histogram “number of errors” versus “deviation from mean value” for 6 targets (error = deviation from mean value) ....................................................................................... 185 Fig. 3. 
63 VIDEOMETRIC results for different check tests .... 186 Fig. 3. 64 VIDEOMETRIC results for different tests .... 187 Fig. 3. 65 VIDEOMETRIC results quality (measurement noise) .... 188 Fig. 3. 66 Concrete floor with epoxy coating .... 189 Fig. 3. 67 Drums stack on AZALEE table (top view) .... 189 Fig. 3. 68 Examples of accelerograms for drums stacks seismic tests .... 191 Fig. 3. 69 A typical 3 pallets and 3x4 drums on AZALEE .... 192 Fig. 3. 70 Drums stacks testing instrumentation .... 193 Fig. 3. 71 Instrumentation implementation on drums stacks .... 193 Fig. 3. 72 VIDEOMETRIC targets fixed on mock-up .... 194 Fig. 3. 73 Comparisons of VIDEOMETRIC and LVDT sensors measurements for shaking table .... 195 Fig. 3. 74 Comparisons of VIDEOMETRIC and LVDT sensors measurements for top drum .... 196 Fig. 3. 75 VIDEOMETRIC measurements for pallets .... 197 Fig. 3. 76 VIDEOMETRIC measurements for top drums .... 198 Fig. 3.
77 Thermal images from a fatigue test to failure on a yielding shear panel dissipative device .......................................................................................................................................... 200 Fig. 3. 78 Thermal images from tests on short beam sections............................................... 203 Fig. 3. 79 Plastic strain distributions deduced from thermal images for the beam pictured in Fig. 3. 60 ................................................................................................................................. 205
List of Tables
Table 3. 1 Maximum deformations for each instrumented section and comparison with εy of longitudinal reinforcing bars. ...................................................................................................... 102
1 Study Overview
The main objective of JRA2 is the implementation and application of new types of sensors, control techniques and modelling tools capable of enhancing the measurement of the response of test specimens and of improving the quality of test control. The activity also aims at developing numerical simulation tools, integrated with data processing, databases and visualisation, for an improved design of test campaigns, including the equipment, and for enhanced interpretation of experimental results. In more detail, the following objective is pursued in both Task JRA2.1 and Task JRA2.2:
– implementation and application of new types of sensors for improved sensing and control. Specifically, new types of instrumentation (wireless, fibre optics and 3D visualisation tools based on several individual sensor measurements or digital video-photogrammetry) and techniques for measuring structural and foundation response (point and field, local and global kinematic measurements, etc.) will be explored. Experiments at different levels of complexity will be carried out to calibrate/validate the proposed instrumentation and techniques.
State-of-the-art report for JRA2
2 JRA 2.1 Advanced Sensing and Control Techniques for Improved Testing Control
2.1 INTEGRATION METHODS
2.1.1 Introduction
Hybrid Simulation (Saouma and Sivaselvan, Editors, 2008), or heterogeneous testing (Bursi O. S. and Wagg D., Editors, 2008), i.e. a method capable of evaluating the dynamic response of substructured systems, is under development. In this method, the structure is torn into at least two parts: some parts, called numerical subdomains, are computationally simulated, while other parts, called physical subdomains, are simulated through actual tests in the laboratory. Pseudo-dynamic testing, continuous pseudo-dynamic testing, fast hybrid testing, real-time substructure testing, real-time dynamic substructure testing and so on are methodologies developed within the hybrid simulation framework (Nakashima et al. 1992; Darby et al. 2001;
The hybrid test is summarized in algorithmic form as follows:
(a) Compute the Jacobian matrix $\mathbf{J}$ from (2.5).
(b) Compute $\mathbf{k}_1$ from (2.3) and evaluate $\mathbf{y}_{k+\alpha_{21}}$.
(c) Impose $\mathbf{y}_{k+\alpha_{21}}$ to the PS, measure the coupling force $\mathbf{f}_{s,k+\alpha_{21}}$ and evaluate $\mathbf{k}_2$ and $\mathbf{y}_{k+1}$ from (2.4) and (2.1).
(d) Impose $\mathbf{y}_{k+1}$ to the PS and measure the coupling force $\mathbf{f}_{s,k+1}$.
(e) Set $k = k+1$ and go to (b).
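The loop in steps (a)-(e) can be sketched in code. The snippet below is a hedged illustration only: the linearly implicit Rosenbrock stages computed from (2.3)-(2.5) are replaced here by a plain two-stage explicit Runge-Kutta (Heun) update, and the physical substructure plus transfer system is mocked as a linear spring, so `measure_coupling_force`, `hybrid_heun` and every parameter value are hypothetical stand-ins for laboratory hardware, not the actual LSRT2 implementation.

```python
import numpy as np

def measure_coupling_force(u, k_phys=1.0e4):
    """Mock physical substructure (PS): a linear spring.  In a real hybrid
    test this would be the force measured on the specimen after imposing u."""
    return k_phys * u

def hybrid_heun(m, c, k_num, u0, v0, dt, n_steps, f_ext=lambda t: 0.0):
    """Steps (a)-(e) with Heun's two-stage explicit RK standing in for the
    report's linearly implicit scheme.  State y = (u, v)."""
    def rhs(t, y, f_s):
        u, v = y
        # numerical-substructure equilibrium with the measured coupling force
        a = (f_ext(t) - c * v - k_num * u - f_s) / m
        return np.array([v, a])

    y = np.array([u0, v0], dtype=float)
    t = 0.0
    history = [y.copy()]
    for _ in range(n_steps):
        # first stage: coupling force measured at the current state
        f_s = measure_coupling_force(y[0])
        k1 = rhs(t, y, f_s)
        # intermediate state imposed to the PS, coupling force measured there
        y_mid = y + dt * k1
        f_s_mid = measure_coupling_force(y_mid[0])
        k2 = rhs(t + dt, y_mid, f_s_mid)
        # final update; the next iteration re-measures the force at y_{k+1}
        y = y + 0.5 * dt * (k1 + k2)
        t += dt
        history.append(y.copy())
    return np.array(history)
```

For a purely linear mock specimen the emulated system is a single oscillator with total stiffness k_num + k_phys, which gives a quick accuracy check of the loop against the analytical free-vibration solution.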
From the aforementioned description, the integrator does not require knowledge of the state (displacement and velocity) or of the coupling force ahead of the actual stage and/or of the end of the time step Δt. This property is referred to as Real-time Compatibility. Furthermore, the integrator is based on a Runge-Kutta scheme and is explicit in displacements and velocities, unlike most Newmark-based schemes. Because of the explicit displacement and velocity, better control performance, such as rapid, accurate and stable responses, should be easily obtained. Because the LSRT2 method is a linearly implicit method, it is more suitable for real-time testing than most monolithic integrators. Moreover, its filtering capabilities beyond the Nyquist frequency $\Omega_N = \pi$ are favourable, as shown in Fig. 2. 2. The method also works well in the nonlinear regime (Bursi et al. 2010).
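Curves such as those in Fig. 2. 2 are produced by forming the amplification matrix of a scheme for an undamped SDOF oscillator and taking the largest eigenvalue modulus as a function of Ω = ωΔt. The sketch below is an illustrative assumption: it uses the classical Newmark family (whose constant average acceleration member has ρ = 1 for every Ω) rather than the real-time compatible schemes actually plotted in the figure; all function names are hypothetical.

```python
import numpy as np

def newmark_amplification(omega_nd, gamma=0.5, beta=0.25):
    """Amplification matrix of the Newmark scheme for an undamped SDOF
    oscillator, written in the scaled state (u, dt*v, dt^2*a).
    omega_nd is the non-dimensional frequency Omega = omega*dt."""
    O2 = omega_nd ** 2
    c = 0.5 - beta
    # acceleration row: a1 = -O2/(1 + beta*O2) * (u0 + dt*v0 + c*dt^2*a0)
    a_row = -O2 / (1.0 + beta * O2) * np.array([1.0, 1.0, c])
    A = np.empty((3, 3))
    A[2, :] = a_row
    # u1 = u0 + dt*v0 + c*dt^2*a0 + beta*dt^2*a1
    A[0, :] = np.array([1.0, 1.0, c]) + beta * a_row
    # dt*v1 = dt*v0 + (1-gamma)*dt^2*a0 + gamma*dt^2*a1
    A[1, :] = np.array([0.0, 1.0, 1.0 - gamma]) + gamma * a_row
    return A

def spectral_radius(A):
    """Largest eigenvalue modulus of the amplification matrix."""
    return float(max(abs(np.linalg.eigvals(A))))
```

Sweeping omega_nd over a logarithmic grid and plotting spectral_radius reproduces plots of the Fig. 2. 2 type; a dissipative member (gamma > 1/2) drops below one at high Ω.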
Fig. 2. 2 Spectral radii ρ of real-time compatible integration methods vs non-dimensional frequency Ω
2.1.2.2 The Chang method
The Chang scheme proposed by Chang (2002) provides explicit displacements and is spectrally
equivalent to the famous Newmark constant average acceleration scheme. The Chang scheme,
applied to real-time hybrid simulations, can be expressed as
$\mathbf{M}\ddot{\mathbf{u}}_{k+1} + \mathbf{r}_n(\mathbf{u}_{k+1},\dot{\mathbf{u}}_{k+1}) = \mathbf{f}_{e,k+1} - \mathbf{f}_{s,k+1}$  (2.7)

$\dot{\mathbf{u}}_{k+1} = \dot{\mathbf{u}}_k + \dfrac{\Delta t}{2}\left(\ddot{\mathbf{u}}_k + \ddot{\mathbf{u}}_{k+1}\right)$  (2.8)

$\mathbf{u}_{k+1} = \mathbf{u}_k + \beta_1\,\Delta t\,\dot{\mathbf{u}}_k + \beta_2\,\Delta t^2\,\ddot{\mathbf{u}}_k$  (2.9)

In order to obtain stability and better performance, the following parameters must be carefully selected:

$\beta_1 = \left[\mathbf{I} + \tfrac{1}{2}\Delta t\,\mathbf{M}^{-1}\mathbf{C}_0 + \tfrac{1}{4}\Delta t^2\,\mathbf{M}^{-1}\mathbf{K}_0\right]^{-1}\left(\mathbf{I} + \tfrac{1}{2}\Delta t\,\mathbf{M}^{-1}\mathbf{C}_0\right)$  (2.10)

$\beta_2 = \tfrac{1}{2}\,\beta_1\left(\mathbf{I} + \tfrac{1}{2}\Delta t\,\mathbf{M}^{-1}\mathbf{C}_0\right)^{-1}$  (2.11)
Investigations (Chang 2002) show that its numerical properties are similar to those of the constant average acceleration method; in this respect, see Fig. 2. 2. The scheme is said to be unconditionally stable, to exhibit no numerical dissipation and to have no overshooting effect. However, this is only demonstrated for a linear structure, where K0 represents the constant
stiffness. Bonnet (2006) and Bonnet et al. (2007) investigated the accuracy of the scheme on the basis of experimental results, which showed that it yields satisfactory results.
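The claimed spectral equivalence can be checked numerically. The sketch below (an illustration with assumed parameter values, not code from the report) integrates a linear SDOF oscillator in free vibration with the Chang explicit displacement update, using the scalar form of the β₁ and β₂ parameters discussed above, and with the implicit constant average acceleration method; for a linear system the two trajectories coincide to round-off.

```python
import numpy as np

def chang_free_vibration(m, c, k, u0, v0, dt, n_steps):
    """Chang explicit scheme for a linear SDOF oscillator in free vibration
    (scalar specialization of the beta_1, beta_2 parameters)."""
    D = 1.0 + 0.5 * dt * c / m + 0.25 * dt ** 2 * k / m
    s = 1.0 + 0.5 * dt * c / m
    b1 = s / D
    b2 = 0.5 / D
    u, v = u0, v0
    a = -(c * v + k * u) / m          # initial acceleration from equilibrium
    us = [u]
    for _ in range(n_steps):
        u_new = u + b1 * dt * v + b2 * dt ** 2 * a      # explicit displacement
        # trapezoidal velocity update v_new = v + dt/2*(a + a_new) solved
        # together with equilibrium m*a_new + c*v_new + k*u_new = 0
        a_new = -(c * (v + 0.5 * dt * a) + k * u_new) / (m + 0.5 * dt * c)
        v = v + 0.5 * dt * (a + a_new)
        u, a = u_new, a_new
        us.append(u)
    return np.array(us)

def aam_free_vibration(m, c, k, u0, v0, dt, n_steps):
    """Newmark constant average acceleration, solved implicitly, as reference."""
    u, v = u0, v0
    a = -(c * v + k * u) / m
    us = [u]
    for _ in range(n_steps):
        rhs = -(c * (v + 0.5 * dt * a) + k * (u + dt * v + 0.25 * dt ** 2 * a))
        a_new = rhs / (m + 0.5 * dt * c + 0.25 * dt ** 2 * k)
        u = u + dt * v + 0.25 * dt ** 2 * (a + a_new)
        v = v + 0.5 * dt * (a + a_new)
        a = a_new
        us.append(u)
    return np.array(us)
```

Note that only the displacement update is explicit: the Chang velocity still follows the trapezoidal rule (2.8), which makes the new acceleration implicit but trivially solvable for a scalar system.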
2.1.2.3 The CR Method
The development and application to monolithic problems of the Chen & Ricles (CR) integration
scheme were first presented by Chen & Ricles (2008a, 2008b). The scheme is spectrally
equivalent to the Newmark constant average acceleration scheme, with γ=1/2, β=1/4 and is
therefore second order accurate, unconditionally stable, non-dissipative and shows minor period
distortion characteristics when applied to monolithic problems. See Fig. 2. 2 in this respect. The
CR scheme presents a major advantage for RTDS testing over the Chang scheme (Chang 2002)
because it provides explicit displacements and explicit velocities. The CR scheme, applied to
RTDS tests is described in Equations (2.12), (2.13) and (2.14), as follows:
1 1 1 , 1 , 1( , )k n k k e k s ku + + + + ++ = −M r u u f f (0.12)
1 1k k kt+ = + ∆u uαu (0.13) 2
1 2k k k kt t+ = + ∆ + ∆u u uαu (0.14) The first step of the scheme involves calculating the updated displacements using the second
difference equation in Equation (2.14) and to apply them to the experimental substructure by the
adoption of the following α₁and α₂ matrices:
1 2 20 0
44 2 t t
= =+ ∆ + ∆
MααM C K
(0.15)
Before the start of the numerical integration process, the method requires an initial estimate of the stiffness and damping matrices:
K₀ ≈ ∂r_n/∂u + ∂r_e/∂u,   C₀ ≈ ∂r_n/∂u̇ + ∂r_e/∂u̇   (2.16)
where K₀ and C₀ are the initial estimates of the stiffness and damping matrices of the emulated
structure. Because the properties of the numerical substructure are known at all times, the
numerical tangent stiffness and tangent damping matrices can be updated at each step, provided
that the required computation time is reasonably short.
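As a minimal sketch (with hypothetical parameter values, not taken from the cited tests), one CR time step for a linear single-DoF oscillator can be written with the α₁ = α₂ matrices reduced to a scalar, using the explicit velocity and displacement updates of Equations (2.13) and (2.14):

```python
# Minimal sketch of the CR scheme for a linear SDOF oscillator
# (hypothetical parameters, for illustration only).
m, c, k = 1.0, 0.1, 100.0
dt, n_steps = 0.01, 1000

# scalar counterpart of the alpha1 = alpha2 matrices
alpha = 4.0 * m / (4.0 * m + 2.0 * dt * c + dt**2 * k)

u, v = 1.0, 0.0                  # initial displacement and velocity
a = (-c * v - k * u) / m         # initial acceleration from equilibrium
for _ in range(n_steps):
    v_next = v + dt * alpha * a                 # explicit velocity, Eq. (2.13)
    u_next = u + dt * v + dt**2 * alpha * a     # explicit displacement, Eq. (2.14)
    a_next = (-c * v_next - k * u_next) / m     # equilibrium, no external force
    u, v, a = u_next, v_next, a_next

# non-dissipative scheme plus physical damping: the amplitude must not grow
assert abs(u) <= 1.0
```

Both kinematic targets are available before the restoring force is measured, which is precisely the property that makes the scheme attractive for RTDS testing.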
Several RTDS tests were successfully conducted by Chen & Ricles (2009) and Chen et al.
(2009). It was experimentally demonstrated that the CR scheme is stable and accurate when
performing RTDS tests. In Chen & Ricles (2008b), the stability of the scheme was investigated
in both the linear and nonlinear regime and it was proven that the scheme is unconditionally
stable as long as the tangent stiffness of the integrated system is of the softening type. In neither
of the aforementioned references, however, was the effect of nonlinear damping in the
experimental substructure investigated, and no conclusions were available on the stability of the
scheme in that particular case. Furthermore, the effect of erroneous estimates of K₀ and C₀ on
the order of accuracy of the scheme is as yet unknown.
In fact, even though the velocity of the CR method is explicit, the velocity target is not used in
the test; furthermore, the linear interpolation of the displacement target induces a velocity
response different from the target, so that the unconditional stability property may be lost.
From this viewpoint, the OSM-RST developed by Wu (2006) might perform better.
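The spectral properties claimed above can be checked numerically. The sketch below is a hypothetical verification, not taken from the cited references: it assembles the amplification matrix of the CR scheme for an undamped SDOF system and confirms that its spectral radius stays at unity even for very large ωΔt, consistent with an unconditionally stable, non-dissipative scheme in the linear regime.

```python
import numpy as np

def cr_spectral_radius(omega, dt):
    """Spectral radius of the CR amplification matrix for an undamped SDOF."""
    alpha = 1.0 / (1.0 + (omega * dt)**2 / 4.0)   # scalar alpha1 = alpha2, c = 0
    A = np.array([
        [1.0, dt, alpha * dt**2],                               # displacement update
        [0.0, 1.0, alpha * dt],                                 # velocity update
        [-omega**2, -omega**2 * dt, -omega**2 * alpha * dt**2]  # a = -omega^2 * u
    ])
    return max(abs(np.linalg.eigvals(A)))

for omega in [0.1, 1.0, 10.0, 100.0]:      # omega * dt from 0.1 to 100
    rho = cr_spectral_radius(omega, dt=1.0)
    assert rho <= 1.0 + 1e-9               # unconditionally stable, non-dissipative
```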
2.1.3 Partitioned Schemes
In these methods, the emulated structure is torn into non-overlapping substructures, where an
incomplete solution of the primal field is evaluated using a direct solver, and intersubstructure
field continuity is enforced via Lagrange multipliers applied at substructure interfaces (Gravouil
and Combescure 2001). Given a structure split into two domains, A and B for instance, the
equations of equilibrium on subdomain A at time t_{n+1} and on subdomain B at times t_{n+j/ss}
(j = 1, …, ss) can be written as

M^A ü^A_{n+1} + R^A(u^A_{n+1}, u̇^A_{n+1}) = F^A_{ext,n+1} + L^{A,T} Λ_{n+1}   (2.17)

M^B ü^B_{n+j/ss} + R^B(u^B_{n+j/ss}, u̇^B_{n+j/ss}) = F^B_{ext,n+j/ss} + L^{B,T} Λ_{n+j/ss}   (2.18)

where the state variables u(t) are nodal quantities arising from a spatial discretization and their
derivatives u̇ and ü with respect to time t are indicated with superposed dots; L^A and L^B are
the constraint matrices which express a linear relationship between the two connected
boundaries. In the case of an inelastic material, R also depends on internal variables that, in turn,
depend incrementally on the current kinematic state of the numerical structure. In particular,
for a linear elastic system with classical damping, it holds:

R^A(u^A_{n+1}, u̇^A_{n+1}) = K^A u^A_{n+1} + C^A u̇^A_{n+1}   (2.19)
and
R^B(u^B_{n+j/ss}, u̇^B_{n+j/ss}) = K^B u^B_{n+j/ss} + C^B u̇^B_{n+j/ss}   (2.20)

where K^A and K^B denote the stiffness matrices of the two subdomains and C^A and C^B the
corresponding damping matrices.
The kinematic interface constraints between the subdomains can be written as

L^A w^A_{n+j/ss} + L^B w^B_{n+j/ss} = 0   (2.21)

where, in general, w can be a displacement (u), a velocity (u̇) or an acceleration (ü).
2.1.3.1 The GC Method
Gravouil and Combescure (2001) proposed a multi-time-step explicit-implicit coupling method,
labelled as the GC method, which is able to couple arbitrary Newmark schemes with different
time steps in different subdomains. Fig. 2.3 shows the basic procedure of the GC method.
Gravouil and Combescure proved that the GC method is unconditionally stable as long as all
individual subdomains satisfy their own stability requirements. Moreover, they showed that for
multi-time-step cases the GC method entails energy dissipation at the interface, while for the
case of a single time step in all the subdomains the GC method is energy preserving. The GC
method is very appealing for Real-time testing and in particular for continuous PsD testing as
heterogeneous numerical and physical substructures can be solved with different implicit/explicit
Newmark schemes in different subdomains, according to their complexity and characteristics.
The possibility of performing a large number of small time steps on a reduced number of DoFs at
the laboratory, at a frequency of about 1 kHz, while computing a large time step on a large number of
DoFs on a remote computer, is mandatory for the proper implementation of the continuous PsD
technique with substructuring. In particular, it maintains the smoothness of the displacement
trajectory without resorting to any extrapolation/interpolation assumption, thus preserving the
optimum signal-to-noise ratio of the continuous method.
Fig. 2.3 The GC method
Unfortunately, as can be seen in Fig. 2.3, the GC method is in essence a sequential staggered
algorithm in which the tasks in the different subdomains are not concurrent: the process
performing the fine time steps must systematically stop and wait for the process involving the
coarse time step. This is a drawback for real-time testing and continuous PsD applications. To
solve this problem, Pegon and Magonette (Pegon and Magonette 2002) developed and implemented
an interfield parallel algorithm based on the GC method, the PM method.
2.1.3.2 The PM Method
The PM method is an extension of the GC method that advances all the subdomains simultaneously
and continuously, as depicted in Fig. 2.4. The method for advancing from t_{n−1} to t_{n+1} in
subdomain A and from t_n to t_{n+1} in subdomain B can be summarized by the following pseudo-code.
1. Solve the free problem in subdomain A using 2Δt^A, thus advancing from t_{n−1} to t_{n+1};
2. start the loop on the ss substeps in subdomain B;
3. solve the free problem in subdomain B using Δt^B, thus advancing from t_{n+(j−1)/ss} to
t_{n+j/ss}, with j = 1, …, ss;
4. linearly interpolate the free velocity u̇_{n+j/ss,f} in subdomain A;
5. compute the Lagrange multipliers Λ_{n+j/ss} by solving the condensed global problem;
6. solve the link problem in subdomain B at t_{n+j/ss};
7. compute the kinematic quantities in subdomain B at t_{n+j/ss} by summing free and link
quantities;
8. if j = ss, end the loop in subdomain B;
9. solve the link problem in subdomain A using 2Δt^A, from t_{n−1} to t_{n+1};
10. compute the kinematic quantities in subdomain A at t_{n+1} by summing free and link
quantities.
Fig. 2.4 The interfield parallel solution procedure of the PM method
With the PM method, one can divide the whole structure into a subdomain where an implicit
Newmark method can be used and a subdomain where an explicit Newmark method can be
adopted. Moreover, the time step in one subdomain can be ss times that of the other one. This
provides the possibility to synchronize the computations in the two subdomains according to
numerical or physical requirements. As a result, this method can be implemented for parallel
simulations of numerical systems but also for hardware-in-the-loop and continuous pseudo-
dynamic testing.
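The free/link splitting described by the pseudo-code above can be sketched in its simplest possible configuration: two linear SDOF subdomains advanced with the same explicit Newmark scheme (β = 0, γ = 1/2) and a single common time step (ss = 1), glued by velocity continuity through a scalar Lagrange multiplier. All numerical values below are hypothetical; with ss = 1 the substep loop collapses to a single condensed interface solve per step.

```python
# Sketch of the GC/PM interface coupling for two SDOF subdomains (A and B),
# both advanced with the explicit Newmark scheme (beta = 0, gamma = 1/2) and
# a single common time step (ss = 1). Velocity continuity v_A = v_B is
# enforced through a scalar Lagrange multiplier Lam. Illustrative values only.
mA, kA = 1.0, 50.0
mB, kB = 2.0, 60.0
dt, n_steps, gamma = 0.005, 2000, 0.5

uA = uB = 1.0            # common initial displacement
vA = vB = 0.0
aA, aB = -kA * uA / mA, -kB * uB / mB

for _ in range(n_steps):
    # free problem in each subdomain (no interface force)
    uA += dt * vA + 0.5 * dt**2 * aA
    uB += dt * vB + 0.5 * dt**2 * aB
    aA_f, aB_f = -kA * uA / mA, -kB * uB / mB
    vA_f = vA + dt * ((1 - gamma) * aA + gamma * aA_f)
    vB_f = vB + dt * ((1 - gamma) * aB + gamma * aB_f)
    # condensed interface problem: solve for Lam so that vA = vB
    H = gamma * dt * (1.0 / mA + 1.0 / mB)
    Lam = -(vA_f - vB_f) / H
    # link problem: correct velocities and accelerations with the interface force
    aA, aB = aA_f + Lam / mA, aB_f - Lam / mB
    vA = vA_f + gamma * dt * Lam / mA
    vB = vB_f - gamma * dt * Lam / mB

# velocity continuity is enforced exactly at every step
assert abs(vA - vB) < 1e-9
```

With β = 0, the interface force only corrects velocities and accelerations, so displacements at the interface may exhibit a small drift; this is the well-known side effect of gluing the subdomains on velocities rather than on displacements.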
The method was shown to be conditionally stable, as the stability of the explicit subdomain
determines the stability of the emulated problem. As long as Δt^B satisfies the stability condition
(Bonelli, 2008), increasing ss has no impact on stability. Regarding accuracy, the scheme is still
second-order accurate when ss is equal to one, but it becomes first-order accurate when ss is
larger than one, as is typical of partitioned schemes; an explanation can be found in Jia (2010).
The numerical damping ratio, which is determined by the energy dissipated at the interface, is
rather limited and similar for all numbers of substeps greater than one, while ss equal to one
corresponds to a non-dissipative case. Compared with the GC method, the PM method exhibits
an accuracy related to 2Δt^A instead of Δt^A, and numerical analysis shows that it is less
dissipative than its progenitor.
Bursi et al. (2010) extended the interfield parallel PM method by introducing the
Generalized-α method into it. In detail, for this partitioned method the Generalized-α method was
developed while avoiding a balanced formulation of the equilibrium equations. It was shown that
the controllable numerical dissipation can be advantageous for solving coupled and/or
heterogeneous structural dynamic systems, where convergence and/or computational efficiency
can be adversely affected by spurious high-frequency components of the response, entailed by
Fig. 3.6 Cyclic test N.4: Comparison between AEPs, strain gauges and fiber optic sensors.
In particular, the sketch of Fig. 3.6, based on the assumption that plane sections remain plane,
illustrates the favourable performance of the external fibers. In summary, the results of the
substructure tests confirm that the externally installed fibre optic system allowed the plastic
moment to be estimated by measuring strains of up to 1 per cent, thus capturing a clear
hysteretic behaviour.
The design of a full-scale test was performed considering the dimensions of a real,
seismically vulnerable metro tunnel. In detail, a circular section with an exterior pipe diameter
of 4.8 m, a thickness of 0.2 m and a tunnel axis depth of 20 m was chosen. As far as
the seismic condition is concerned, the worst case for the structural safety of a lining is that in
which the direction is inclined at 45°, because the maxima of the seismic actions add to the
maxima of the static loads. In this way the maximum moment is reached at 0°, 90°, 180° and 270°,
[Plot of Fig. 3.6: moment M [kNm] versus rotation θ [mrad] for external fibers, strain gauges and internal fibers]
respectively. This was the configuration chosen for the full-scale test, whose sketch is reported in
Fig. 3.7. In detail, the application of an axial force by means of steel ropes acting on a system of
cylindrical bearings proved to be the most efficient solution with respect to friction losses; the
ovalling of the section by two hydraulic actuators allowed a good representation of the stress
state predicted by the Penzien-Wu method (1998). The specimen and the testing equipment with
sensors are shown in Figs. 3.7 and 3.8, respectively.
Fig. 3.7 Full scale test set-up of the tunnel ring (dimensions in cm)
Fig. 3.8 Full scale test set-up of the tunnel ring (dimensions in cm)
It was decided to apply the ECCS procedure (Technical Committee 1, TWG 1.3, 1986) both for
the substructure specimens and for the final test. The loading protocol was proportional to a
conventional displacement δy representing the elastic-plastic transition of the cross section.
Since a monotonic test was not possible, this parameter was estimated to be about 60 mm.
The displacement history exerted by Actuator N. 1 is depicted in Fig. 3.9.
Fig. 3.9 Comparison between actuator inner displacement and wire 2-6
The pre-straining values recorded in Section 1 (see Fig. 3.8) by the external unbonded AOS fibers
are highlighted in Fig. 3.10. An average value of -120 μm/m was imposed.
Fig. 3.10 External unbonded AOS fiber data in Section 1 for the pre-straining phase.
Typical values recorded by the bonded and unbonded AOS fibers are shown in Figs. 3.11 and
3.12, respectively. The vertical line indicates the failure of Section 8.
Fig. 3.11 Inner bonded AOS fiber data in Section 2 during the ECCS phase.
Fig. 3.12 External unbonded AOS fiber data in Section 8 for the ECCS phase.
To give an overview of the results, the maximum deformation values for each instrumented
section, together with a comparison with the yield strain εy of the longitudinal reinforcing bars,
are gathered in Table 3.1.
Table 3.1 Maximum deformations for each instrumented section and comparison with the εy of the longitudinal reinforcing bars.

Section | εmax, AOS fiber inside ring | vs. εy of reinforcing bar | εmax, AOS fiber outside ring | vs. εy of reinforcing bar
1 | 0.1209% | < 0.3% | 0.0532% | < 0.3%
2 | 1.1880% | > 0.3% | 0.4630% | > 0.3%
3 | 0.0212% | < 0.3% | 0.0129% | < 0.3%
4 | 0.8724% | > 0.3% | 0.5939% | > 0.3%
5 | 0.0480% | < 0.3% | 0.0226% | < 0.3%
6 | 0.5380% | > 0.3% | 0.2146% | > 0.3%
7 | 0.1751% | < 0.3% | 0.0334% | < 0.3%
8 | 0.6308% | > 0.3% | 0.5014% | > 0.3%
As expected, the fibers measured higher deformations in Sections 2, 4, 6 and 8, where plastic
hinges formed. In detail, the external AOS fibers approached maximum values of about 0.6 per cent
in Section 8, whilst the inner AOS fibers reached maximum values of about 1.2 per cent in
Section 2.
The inner (embedded) AOS fibers showed readings less disturbed than those provided by the
external AOS fibers located in the sections without plastic hinges, i.e. Sections 1, 3, 5 and 7. The
Brillouin fibers also showed measurements in agreement with those of the AOS fibers until the
failure of Section 8; however, the relevant signals were disturbed by persistent noise. The
temperature measurements provided by the AOS fibers N. 2i, 2o, 3i, 3o, 4i, 4o, 6i and 6o
indicated variations of temperature between 19.05 °C and 21.51 °C. The resulting variation of
about 2 °C was consistent with the 4-hour duration of the test.
There are a number of commercially available fibre-optic interrogation systems which are based
on amplitude measurement with a microbending sensor (OSMOS) or spectrum measurement
with a fibre Bragg grating (Micron Optics, Insensys, Smart Fibres, Blue Road Research). There
also exist measurement systems which utilize interferometry, such as the Fabry-Perot system
(Fiso Technologies, Roctest), or a low coherence interferometer (Smartec, Fox-Tek, Fogale
Nanotech). Furthermore, some systems are based on measuring frequency changes in Brillouin
backscattered radiation by means of Brillouin optical time domain reflectometry (BOTDR,
Yokogawa Electric) or Brillouin optical time domain analysis (BOTDA, Omnisens).
Concluding Remarks
The use of fibre optic technology has become common practice in structural monitoring systems.
Many articles are available on the successful applications of this technology and on its
development. The commercial fibre optic sensors listed above have proved successful in
structural monitoring, and they can be reliably employed in this respect.
3.3 MICROELECTROMECHANICAL SYSTEMS
Micro-Electro-Mechanical Systems (MEMS) is the integration of mechanical elements, sensors,
actuators, and electronics on a common silicon substrate through microfabrication technology.
While the electronics are fabricated using integrated circuit (IC) process sequences (e.g., CMOS,
Bipolar, or BICMOS processes), the micromechanical components are fabricated using
compatible "micromachining" processes that selectively etch away parts of the silicon wafer or
add new structural layers to form the mechanical and electromechanical devices.
MEMS are made up of components between 1 and 100 micrometers in size (i.e. 0.001 to 0.1 mm),
and MEMS devices generally range in size from 20 micrometers to a millimeter. They usually
consist of a central unit that processes data (the microprocessor) and several components, such as
microsensors, that interact with the surroundings. The distinctive features are miniaturization,
micro-electronics and multiplicity.
MEMS technology will have an impact on engineering in the following ways (Nii O. Attoh-Okine, 2002):
• By causing an orders-of-magnitude increase in the number of sensors and actuators.
• By enabling the use of very large-scale integration (VLSI) as a design and synthesis approach
for electromagnetics.
• By becoming a driver for multiple, mixed and emerging technology integration.
• By being both a beneficiary of and a driver for information systems.
In addition to the potential economic benefits, MEMS has the ability to integrate mechanical (or
chemical, biological and environmental) functions. It also allows for the consideration of concepts
such as highly distributed networks for the condition monitoring of large civil infrastructure
systems.
Today, a number of MEMS sensors are available off the shelf, including accelerometers,
pressure gauges, load cells, gyroscopes, and chemical gauges (Schenk et al., 2001). Due to their
low cost and small dimensions, MEMS are likely to radically change the philosophy of
instrumenting test pieces during laboratory tests, as they allow a much more
refined distribution of sensors throughout the investigated structure. Moreover, they can be
wireless, i.e. they require power but no signal cables (Jung et al., 2001).
MEMS application in civil engineering is theoretically feasible, even if few applications can be
found in the literature (Lynch et al., 2003). The synergistic combination of MEMS technology
and Opto-electronics has recently evolved into a class of integrated micro systems, expected to
create important new application opportunities. The component areas of MEMS are categorised
as micro-machines, micro-integrated circuits, micro-optics and diffractive optics; the latter two
are often called MOEMS technologies. In the MOEMS sector, low-cost miniature spectrometers
are key components in the realisation of small-sensor solutions for applications such as colour
measurement or industrial process control (Grueger et al. 2003).
The high costs associated with commercial monitoring systems can be substantially reduced
through the adoption of MEMS sensors (C.U. Grosse et al., 2006). MEMS accelerometers are
much smaller, more functional, lighter and more reliable, and are produced at a fraction of the
cost of conventional macroscale accelerometer elements. Various MEMS-based accelerometers are
commercially available that are mechanically similar to traditional accelerometers but fabricated
on a micrometer scale. An additional advantage of MEMS sensors is the ability to fabricate
signal-conditioning circuitry monolithically on the same die, resulting in improved sensor
performance and reduced sensor cost in the case of mass-volume production (J. W. Judy
et al., 2001).
In June 2009, Adam Pascale reported the use of MEMS accelerometers for earthquake monitoring.
In this report, he stated that, apart from the significant cost saving over traditional force-balance
accelerometers, micro-electromechanical systems sensors have, by the nature of their design, a
much better high-frequency response. Whereas most earthquake accelerometers are specified as
having a frequency response of DC to 50Hz, 100Hz or in some cases 200Hz, the seismic-
oriented MEMS sensors have a much higher frequency range. For example, the Silicon Designs
units used in the ESS-1221 sensor have a frequency response of DC to 400Hz, and the Colibrys
SF3000L MEMS sensors extend to 1000Hz.
Rafael Aguilar et al. (2009) have presented the use of MEMS for structural dynamic monitoring
in historical masonry structures. This new technology offers great advantages, such as economy,
time saving and simplicity, for dynamic monitoring systems.
Concluding Remarks
Microelectromechanical systems are a very attractive choice for structural monitoring.
Although this is a relatively new technology and much remains to be developed, some recent
applications (e.g. Adam Pascale, 2009; Rafael Aguilar et al., 2009) show the potential of MEMS
sensors to be readily used for structural monitoring purposes. Crossbow Technology
(www.xbow.com), an established sensor company, produces MEMS sensor systems and can
supply the desired sensors.
3.4 WIRELESS SENSORS AND SENSOR NETWORKS
3.4.1 Introduction
In recent years, there has been increasing interest in the adoption of emerging sensing
technologies for instrumentation within a variety of structural systems. Wireless sensors and
sensor networks have begun to be considered as a substitute for traditional tethered
monitoring systems in structural engineering. Wireless sensor networks are inexpensive to
install, can play a greater role in the processing of structural response data and can even
drive actuators.
In structural monitoring systems, because of the high cost of wiring, only a small number of
sensors (10-20) is typically installed in a single structure. Such small numbers of sensors are not
very effective in structural monitoring systems, because they poorly resolve the localized
behaviour of the structure, often rendering detection based on global behaviour difficult to
implement. With potentially hundreds of wireless sensors installed in a single structure, the
wireless monitoring system is better equipped to screen for structural behaviour by monitoring
the behaviour of critical structural components.
Wireless sensors are autonomous data acquisition nodes to which traditional structural sensors
(e.g. strain gages, accelerometers, linear voltage displacement transducers, inclinometers, among
others) can be attached. Perhaps the greatest attribute of the wireless sensor is its collocation of
To provide more program and data storage and to improve the flexibility of the wireless
communication channel, Crossbow released the MICA Mote wireless sensor in early 2002 as the
successor to the Rene 2. Ruiz-Sandoval et al. (2003) reported their experiences using the
MICA Mote wireless sensing platform for structural monitoring, and in 2004 they proposed a new
sensor board to replace the existing one of the MICA Mote, addressing a limitation of this sensor.
In 2003, the MICA was modified to improve the reliability of the communication channel.
Since the MICA2 Mote is unable to measure structural strain, Nagayama et al. (2004) implemented
a new integrated strain sensor board for the MICA2 Mote that accommodates strain gages.
Pakzad and Fenves (2004) described a study in which a novel prototype accelerometer sensor board
was integrated with a MICA2.
Close research collaboration between the University of California, Berkeley and the Intel Research
Berkeley Laboratory has resulted in a next-generation Mote platform called the iMote. The iMote
employs a highly modular construction, allowing sensing interfaces fabricated as separate boards
to be snapped onto the iMote circuit board.
3.4.5 ZigBee and 802.15.4 Overview
The ZigBee Alliance [ZIG05] is an association of companies working together to develop
standards (and products) for reliable, cost-effective, low-power wireless networking and it is
foreseen that ZigBee technology will be embedded in a wide range of products and applications
across consumer, commercial, industrial and government markets worldwide.
ZigBee builds upon the IEEE 802.15.4 standard, which defines the physical and MAC layers for
low-cost, low-rate personal area networks. ZigBee itself defines the network layer specifications,
handling star and peer-to-peer network topologies, and provides a framework for application
programming in the application layer.
3.4.6 IEEE 802.15.4 Standard
The IEEE 802.15.4 standard defines the characteristics of the physical and MAC layers for Low-
Rate Wireless Personal Area Networks (LR-WPAN). The advantages of an LR-WPAN are ease
of installation, reliable data transfer, short-range operation, extremely low cost, and a reasonable
battery life, while maintaining a simple and flexible protocol stack.
3.4.7 Field Deployment of Wireless Sensors in Civil Infrastructure Systems
The deployment of wireless sensors and sensor networks in actual civil structures is perhaps the
best approach to assessing the merits and limitations of this nascent technology. In particular,
bridges and buildings provide complex environments in which wireless sensors can be
thoroughly tested. The transition of wireless monitoring systems from the laboratory to the field
has been demonstrated by a number of research studies. In all of these studies, the goal of the
researchers has been to assess the performance of a variety of wireless sensor platforms for the
accurate measurement of structural acceleration and strain responses. Common to most of the
studies reported, the sensitivity and accuracy of the wireless monitoring systems are compared to
that of traditional cable based monitoring systems which have been installed alongside their
wireless counterparts.
Perhaps the earliest field validation of wireless telemetry for monitoring the performance of
highway bridges was described by Maser et al. (1996). Their wireless monitoring system, called
the Wireless Global Bridge Evaluation and Monitoring System (WGBEMS), consists of two
levels of wireless communication.
After completing the design of their academic wireless sensor prototype, Straser and Kiremidjian
(1998) utilized the Alamosa Canyon Bridge to validate its performance. Comparing the
acceleration response of the bridge measured by the wireless sensors and by the tethered monitoring
system, the time-history response records were found to be in strong agreement. Using the same bridge as
Straser and Kiremidjian (1998), the performance of the wireless sensing prototype developed by
Lynch et al. (2003a) was validated in the field.
Galbreath et al. (2003) demonstrated the use of a wireless sensor network to monitor the
performance of a steel girder composite deck highway bridge spanning the LaPlatte River in
Shelburne, Vermont. They selected the Microstrain SG-Link wireless sensor platform to measure
flexural strain on the bottom surface of the bridge girders.
Aoki et al. (2003) have outlined the validation of their Remote Intelligent Monitoring System
(RIMS) wireless sensor platform. To test the accuracy of their wireless monitoring system, field
tests are performed using a flexible light pole mounted to the surface of the Tokyo Rainbow
Bridge, Japan. With fatigue failure common in light poles subjected to frequent excitation, the
study is intended to illustrate the potential of the RIMS wireless monitoring system to monitor
the long-term health of non-structural components on bridges.
Chung et al. (2004a, 2004b) have described a detailed study taken to validate the performance of
their DuraNode wireless sensing unit prototype. Using two different MEMS accelerometers
(Analog Devices ADXL210 and Silicon Design SD1221) interfaced to the wireless sensing unit,
the ambient and forced response of a 30 m long steel truss bridge is recorded. To compare the
accuracy of the wireless monitoring system, a traditional cable-based monitoring system is also
installed; the cable-based system uses piezoelectric PCB 393C accelerometers as its primary
sensing transducer. Results from the field study show very strong agreement in the acceleration
time histories recorded by both the wireless and cable-based monitoring systems.
Binns (2004) has presented a wireless sensor system developed by researchers at the University
of Dayton, Ohio for bridge monitoring. The wireless monitoring system, called WISE (Wireless
InfraStructure Evaluation System), can perform wireless monitoring of bridge structures using
any type of analog sensor.
Lynch et al. (2005) have installed 14 wireless sensing unit prototypes to monitor the forced
vibration response of the Geumdang Bridge in Korea. The Geumdang Bridge is a newly
constructed concrete box girder bridge continuously spanning 122 m. The vertical acceleration of
the bridge is measured by the wireless sensing units using PCB 3801 capacitive accelerometers
mounted on the interior spaces of the box girder. In tandem with the wireless monitoring system
is a cable-based monitoring system with PCB 393C piezoelectric accelerometers mounted
adjacent to the wireless sensing unit accelerometers. The stated goals of the field validation study
are to assess the measurement accuracy of the wireless sensing units, to determine the ability of a
central data repository to time synchronize the wireless sensor network, and to use the wireless
sensors to calculate the Fourier amplitude spectra from the recorded acceleration records.
Comparing the recorded time histories of the bridge using both monitoring systems (wireless and
cable-based), the accuracy of the wireless sensing units is confirmed. In addition, the time
synchronization procedure implemented by Wang et al. (2005) is shown to perform correctly for
almost all of the wireless sensing units.
Jin-Song Pei et al. (2007) have carried out an experimental study to investigate the reliability
issue of applying wireless sensing to structural health monitoring. They have developed a
wireless unit by using an off-the-shelf microcontroller and radio components; software has been
developed to capture the loss of data using a flexible payload scheme when transmitting
vibration data from a shake table through various building materials.
Kincho et al. (2008) have performed laboratory experiments that are designed to assess the
viability of decentralised wireless structural control using a six-story scaled structure. Multiple
centralized/decentralized control architectures based on different communication and
information processing schemes are investigated. The results indicate that decentralized control
strategies may provide equivalent or even superior control performance, given that their
centralized counterparts could suffer longer feedback time delay due to wireless communication
latencies.
Lynch et al. (2008) have proposed a wireless sensor prototype capable of data acquisition,
computational analysis and actuation for use in a real-time structural control system. The
performance of a wireless control system is illustrated using a full-scale structure controlled by a
semi-active magnetorheological (MR) damper and a network of wireless sensors. The wireless
control system proves effective in reducing the inter-storey drifts of each floor during seismic
excitation. Particularly for the case of acceleration feedback control, the wireless control system
performs at a level of performance equivalent to a baseline wired control system for both far- and
near-field seismic excitations.
Jian-Huang Weng (2008) has presented two modal identification methods that extract dynamic
characteristics from output-only data sets collected by a low-cost and rapid-to-deploy wireless
structural monitoring system installed on a long-span cable-stayed bridge. The use of wireless
sensors proved very effective: during data collection, the wireless monitoring system
experienced no data loss, thanks to a highly robust communication protocol.
Kung-Chun Lu et al. (2008) have designed a wireless sensing system for structural
monitoring and damage detection applications. To validate the performance of the proposed
wireless monitoring and damage detection system, two near full scale single-story RC-frames,
with and without brick wall system, are instrumented with the wireless monitoring system for
real time damage detection during shaking table tests. The accuracy and sensitivity of the
MEMS-based wireless sensors employed are also verified through comparison to data recorded
using a traditional wired monitoring system.
3.4.8 Reliability assessment of wireless sensors at the University of Trento
At the University of Trento, a project named “MEMSCON” (Pozzi et al., 2009) included a task of
assessing the accuracy and reliability of a wireless sensing system in conditions similar to those
experienced in the field during a seismic event. Under this task, the performance of the sensors was
tested by mounting them on a shaking table, back to back with high-precision wired piezoelectric
seismic accelerometers; the table is instrumented with traditional accelerometers and can be
driven with harmonic or random excitations.
Each sensing node used in the tests (MOTE unit) is packaged in a plastic box of dimensions
11x8x4 cm, fitted with a 19 cm high antenna. The weight of a node is 150 g, and it contains a
tri-axial accelerometer with the following performance characteristics:
Sampling rate: 100 Hz
Resolution: 18 mg (= 0.18 m/s2)
Range: from -2 g to +2 g (= from -20 m/s2 to +20 m/s2)
Sampling period: up to 30 seconds
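As a quick sanity check of these specifications, the implied quantisation depth and the data volume of one maximum-length burst can be computed; the 2-bytes-per-sample packing assumed below is illustrative, not a documented figure of the MOTE unit:

```python
import math

# Nominal MOTE accelerometer specifications from the text above
sample_rate_hz = 100      # samples per second, per axis
resolution_g = 0.018      # 18 mg
full_range_g = 4.0        # -2 g .. +2 g
burst_s = 30              # maximum sampling period

levels = full_range_g / resolution_g      # distinguishable levels (~222)
bits = math.ceil(math.log2(levels))       # minimum ADC depth in bits
samples = 3 * sample_rate_hz * burst_s    # raw samples in one tri-axial burst
print(levels, bits, samples, 2 * samples)
```

With these values, the ±2 g range at 18 mg resolution fits in 8 bits, and a full 30 s tri-axial burst contains 9000 samples.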
Fig. 3.15 shows an overview of the system base station and a node as used during the
laboratory tests. A single base station, connected via USB to a standard PC, is able to acquire
vibration measurements from many nodes at ranges of tens of metres, even inside a building.
Fig. 3. 15 A Base Station and a MOTE Unit
The testing scheme reflects the aim of acquiring data independently and simultaneously from the
reference and wireless sensors, subjected to the same excitation generated by the shaking table.
Fig. 3. 16 Testing Scheme
Fig. 3. 17 Laboratory Test Layout
Two types of tests have been performed:
1) Calibration tests on the wireless sensors, for the three axes and at different frequencies.
2) Simulation of an earthquake scenario, in order to study the behaviour of the sensors during an
earthquake.
Calibration Tests
During the calibration tests, a small shaking table was driven with harmonic waveforms set by
the operator via a function generator. For each direction of the sensors (X, Y, Z), tests at 1,
2, 4, 8 and 16 Hz were carried out. Each testing frequency was repeated twice at different wave
amplitudes, obtaining acceleration peaks of about 1 m/s2 or 4 m/s2 (tests called “Low
Amplitude” and “High Amplitude”, respectively). In summary, 33 calibration tests were carried
out. The sensor arrangements in the X, Y and Z directions are shown in Fig. 3.18.
Fig. 3.18 Sensors arrangements; (a) Tests on X axis, (b) Tests on Y axis, (c) Tests on Z axis
The calibration was performed using the “back to back” mounting scheme, by direct comparison
between the reference accelerometer and the accelerometer under test. Two parameters, the
acceleration factor and the frequency factor, were measured. The “Frequency Factor” quantifies
the effective sampling rate of the wireless accelerometer by comparing it with the rate of the
wired system, while the “Acceleration Factor” quantifies the sensitivity of the instrument. For
the three wireless sensors used (WL1, WL2, WL3), the mean values of the acceleration and
frequency factors are presented below.
                     WL1      WL2      WL3
Acceleration Factor  1.0327   1.0147   0.9927
Frequency Factor     1.0075   0.9748   0.9921
The time histories and spectra of the wireless accelerometer WL1 and the reference wired accelerometer B12-1, after pre-processing and fitting with these parameters, are shown in Fig. 3.19. The results show good agreement between the signals acquired by the two types of accelerometer.
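As an illustration of how such factors can be applied, the sketch below rescales a hypothetical wireless record with the WL1 values; whether each factor multiplies or divides the raw signal depends on how it was defined during calibration, so the convention used here is an assumption:

```python
# Applying the WL1 calibration factors to a hypothetical wireless record.
# Multiplication by the factors is assumed; the calibration convention
# could equally be defined the other way round.
acc_factor = 1.0327      # amplitude (sensitivity) correction for WL1
freq_factor = 1.0075     # effective/nominal sampling-rate ratio for WL1

nominal_rate_hz = 100.0
raw = [0.00, 0.95, 0.00, -0.95, 0.00]   # hypothetical samples, m/s^2

corrected = [a * acc_factor for a in raw]       # amplitude correction
dt = 1.0 / (nominal_rate_hz * freq_factor)      # corrected sampling step
times = [i * dt for i in range(len(raw))]       # corrected time axis
print(corrected[1], round(times[-1], 5))
```

The frequency factor stretches or compresses the time axis so that the wireless record lines up with the wired reference before comparison.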
Fig. 3.19 (a) Fitted time histories of the sample test using the test parameters;
(b) FFT of the fitted time histories of the sample test applying the test parameters
Earthquake Simulations
Besides the calibration tests, additional tests were carried out with the aim of better
understanding the behaviour of wireless sensors during a seismic event. A 2-storey
steel/aluminum frame was mounted on a shaking table. By installing wireless and wired
instruments back to back on the frame, three different time histories were induced: one on the
table and two on the frame floors, correlated by the mechanical properties of the structure. In
particular, two modulating-frequency tests (“SWEEP TEST”) and one random-input test, with the
frequency of the shaking table set at about 2 Hz (the resonance frequency of the first vibration
mode of the frame), were carried out. The most important physical properties of the frame are
[…] attainable by these still cameras is quite low, but adapted to quasi-static experiments.
Other solutions, based on true measurements at each pixel, may be more satisfying for our purpose.
Although tracking programs need only black-and-white information, adding colour information
could help in some cases, but most of the time black and white is sufficient.
(5) Sensors’ Noise sources
If one wants to use a camera as a measuring instrument, the noise problem must be dealt with in
some way. In what follows, the main noise sources are briefly described; more information can
be found in Janesick (2001, 2007) and Reibel (2003).
Dark Current:
A semiconductor in thermodynamic equilibrium exhibits continuous, random generation and
recombination of electron-hole pairs, and the generation-recombination rate increases with
temperature. Electrons freed in this process may eventually recombine, or may be trapped in the
pixel potential well and become a source of random noise: this is known as “Dark Current” (DC).
Moreover, the generation rate may vary over the pixel array. Special designs have been developed
to avoid the main sources of dark current in CCDs, namely the interfaces (e.g. multi-phase
pinned (MPP) devices, which reduce the DC to 10 pA/cm2 at 300 K). Custom designs are currently
being refined to obtain the same results for CMOS.
Photon shot noise:
Light is not continuous: a finite number of photons arrives on a given surface per unit time.
Even under “constant” illumination, the photon arrivals follow Poisson statistics, with a
standard deviation equal to the square root of the average number. Considering a pixel of area
A with quantum efficiency η, receiving an average of n photons per unit time, the signal value
is nAηt and the photon shot noise is (nAηt)^(1/2), where t is the exposure time.
Read noise:
The read-out amplifier has its own noise, as previously discussed, and it may increase with the
reading frequency. The pixel reset before charge accumulation also has its own noise, which can
be included in the read-out noise.
Quantisation noise:
Round off errors are introduced during the analog to digital conversion.
Amplifier glow:
This is mentioned here just for its peculiarity: when the read-out amplifier is working, it
emits infra-red radiation that induces free charges in the adjacent pixels, thus provoking a
glow on the sensor zone in the vicinity of the amplifier.
Fixed Pattern Noise (FPN):
The electronic properties and the geometry of the pixels vary over the chip surface. This non-
uniformity of the pixel response, which gives systematic errors, is called Fixed Pattern Noise.
It may be removed by measuring the flat-field response of the sensor, obtained by illuminating
the sensor with a uniform light. Averaging multiple samples of the flat field attenuates the
effect of the other noise sources. The FPN bias is then removed by dividing, pixel by pixel,
the frame values by the normalised flat-field response. It is noteworthy that other parasitic
effects, such as dust or deposits on the sensor or optics, can be assimilated to the basic FPN
and treated by the same flat-field correction. This should be done once the camera is positioned
and its optics properly set to the experimental conditions.
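The flat-field correction described above can be sketched as follows (toy 2x2 frames, pure Python for clarity):

```python
# Flat-field (FPN) correction: average several flat frames, normalise
# by their mean, then divide the raw frame pixel by pixel.
def average_frames(frames):
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[sum(f[r][c] for f in frames) / len(frames) for c in range(cols)]
            for r in range(rows)]

def flat_field_correct(frame, flat_frames):
    flat = average_frames(flat_frames)   # averaging attenuates other noise
    mean = sum(map(sum, flat)) / (len(flat) * len(flat[0]))
    # divide each pixel by the normalised flat-field response
    return [[frame[r][c] / (flat[r][c] / mean) for c in range(len(frame[0]))]
            for r in range(len(frame))]

# Toy 2x2 sensor whose right column is 10% more sensitive
flats = [[[100, 110], [100, 110]]] * 4   # repeated flat-field exposures
raw = [[50, 55], [80, 88]]
print(flat_field_correct(raw, flats))    # rows become uniform: ~52.5 and ~84.0
```

After correction the systematic column-to-column sensitivity difference disappears, leaving only the true scene values.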
(6) Sensors’ noise parameters
Dynamic range:
The dynamic range of the sensor is given by the ratio of the full-well capacity to the sensor
rms noise in the dark. It may be expressed in decibels. The number of quantisation levels of
the signal should match the dynamic range.
Signal to noise ratio (SNR):
The SNR is given by the ratio of signal electrons to noise electrons. If the signal has been
correctly cleaned of FPN, and when the sensor DC and the read-out noise are negligible compared
to the shot noise, the SNR varies as (nAηt)^(1/2); when the DC and the (constant) read-out
noise dominate, it behaves as nAηt.
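These two regimes can be summarised in a small numerical model; the parameter values are illustrative only:

```python
import math

# SNR model combining the regimes described above: shot-noise-limited
# (SNR grows as the square root of the signal) versus read-noise-limited
# (SNR grows linearly with the signal).
def snr(n_photons_per_s, area, qe, t_exp, read_noise_e=0.0, dark_e_per_s=0.0):
    signal = n_photons_per_s * area * qe * t_exp   # collected electrons
    shot_var = signal                              # Poisson variance
    dark_var = dark_e_per_s * t_exp                # dark-current shot-noise variance
    noise = math.sqrt(shot_var + dark_var + read_noise_e ** 2)
    return signal / noise

# Shot-noise limited: SNR = sqrt(n A eta t)
print(snr(1e4, 1.0, 0.5, 1.0))                       # sqrt(5000) ~ 70.7
# Read-noise dominated (very short exposure): SNR ~ n A eta t / read noise
print(snr(1e4, 1.0, 0.5, 0.001, read_noise_e=50.0))  # ~ 0.1
```

Doubling the exposure time therefore gains only a factor √2 in SNR in the shot-noise regime, but a factor 2 while read-out noise dominates.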
(7) Present status of CCD and CMOS sensors
CMOS sensors are approaching the quality of CCDs, but the cost of their development was higher
than expected. The advantage of CMOS is that it can attain higher frame rates (frames per
second, FPS) than CCD, and it has a lower power consumption. Manufacturers are presently
increasing the FPS of CCDs, but the read-out noise becomes a difficulty.
Recent announcements from Fairchild and PCO of a new concept of scientific CMOS give hope to
those wishing to reach high FPS while keeping the high quality of CCD sensors. This remains to
be confirmed by the market.
(8) Future of CCD and CMOS sensors
The CCD technology seems to be mature, while the CMOS technology benefits from the evolution of
the semiconductor industry, whether at the level of planar integration or in the new domain of
3D integration.
Sensor planar technology: the advent of new integration techniques will dramatically modify the
properties of imaging sensors. Techniques are being developed to create a permanent electric
field in the semiconductor (by a gradient of doping enrichment) so that the potential well is
more extended and charge trapping better stabilised (Bogaert, 2007). It is even possible to cut trenches around
pixels and fill them with an ad-hoc compound, so that a stabilising electric field is
permanently present at the border of the pixel. These new designs will strongly reduce the
cross-talk of CMOS devices. These new sensors are first being developed for space applications,
where there is a need to replace the usual CCDs by CMOS sensors, which are less sensitive to radiation.
New techniques of 3D integration: the 2D, planar concept of the integrated circuit that has
prevailed up to now in the semiconductor industry is being superseded by the 3D concept. This
new technology will touch every electronic component, from CMOS image sensors and DRAM to CPUs.
Basically, the various functions of a sensor can be integrated in successive stacked tiers. For
example, the CMOS pixel surface can be freed from the readout electronics in a 3D design where
the first tier corresponds to the photosensitive surface and the second tier to the read-out
electronics. Both layers are electrically interconnected by “Through-Silicon Vias”
perpendicular to the sensor plane. Besides the increased fill factor, many advantages result
from 3D design: all interconnections are shortened, the photosensitive substrate can be fully
adapted to the spectral range of the sensor, and the design favours parallel processing (e.g.
direct connection to an FPGA tier). Last but not least, the introduction of the third
connecting dimension frees the borders of the chip and renders it four-edge buttable, which
means that larger sensors can be built by tiling identical smaller sensors (see Ziptronix, among others).
3.5.3.2 Time of flight sensors
Time-of-flight (TOF) sensors consist of a pulsed near-infrared source (usually LEDs) associated
with an imaging sensor sensitive to the given IR range. Under constant IR light, the sensor
pixels would give an image of the IR reflectance of the scene. When the IR light is modulated
(at a few tens of MHz), each pixel registers a signal whose phase shift is proportional to the
distance of its corresponding object point. Thus, a TOF sensor is able to give the reflectance
and the distance of the scene at each pixel (Oggier et al. 2007). Phase folding limits the
unambiguous distance range as a function of the modulation frequency. The main noise source of
these sensors is shot noise; this results in uncertainties on the measured distance of the
order of ±10 mm at 6 m for a bright surface (up to ±74 mm at 5 m for a dark surface).
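The phase-to-distance relation and the range limit due to phase folding can be sketched as follows (the 30 MHz modulation frequency is an illustrative value within the stated range):

```python
import math

C = 299_792_458.0   # speed of light, m/s

# Phase shift -> distance for a continuous-wave TOF pixel
def distance_m(phase_rad, f_mod_hz):
    # The modulated light travels to the target and back, hence 4*pi
    return C * phase_rad / (4.0 * math.pi * f_mod_hz)

def unambiguous_range_m(f_mod_hz):
    # Beyond this distance the phase wraps (folds) and distances alias
    return C / (2.0 * f_mod_hz)

f_mod = 30e6                         # 30 MHz modulation (a few tens of MHz)
print(unambiguous_range_m(f_mod))    # ~5.0 m
print(distance_m(math.pi, f_mod))    # half the unambiguous range, ~2.5 m
```

Raising the modulation frequency improves the distance resolution but shrinks the unambiguous range, which is why phase folding is tied to the choice of frequency.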
Fig. 3. 26 The Swissranger® SR4000 range camera
This type of sensor was first conceived by CSEM in 1998 and brought to market in 2001 under the
name SwissRanger; since 2006 the SwissRanger has been commercialised by a spin-off of CSEM, and
numerous companies have started producing their own products based on the same principle (Kolb
et al. 2010). It seems that the depth accuracy can be improved with special processing, but it
is still rather low, hence it does not comply with the resolution standards necessary for the
task at hand. Although these sensors are presently limited to a resolution of at most 2042
pixels, their use in conjunction with a stereo system may be useful.
3.5.3.3 Optical Calibration
The calibration of the optical system is an essential stage in obtaining accurate results, but
it is not an easy task in a civil engineering laboratory, because the optical system has to
adapt to many different experimental situations.
This step will be illustrated with the calibration of a stereo rig carried out during a recent
experiment performed at ELSA (FUTURE-Bridge project) on a fibre-reinforced composite bridge
beam. In the general view of the experiment shown in Fig. 3.32, the half of the bridge that was
observed is the most distant one. The two cameras were placed on the ground, 4 metres apart
and 9 metres from the bridge (the direction of observation is indicated by a red arrow). These
cameras are based on monochrome Kodak KAF-1602 CCD image sensors that have a pixel resolution
of 1536 x 1024 (with a pixel size of 9 μm). Each camera delivers
monochrome images on 12 bits, with an effective dynamic range of 68 dB, and is equipped with an
AF Nikkor 24-50 mm lens.
The common field of view of the stereo rig corresponds to a span of ~6 m on the bridge. The
photo of the bridge as seen by the right camera is shown in Fig. 3.27.c. In this figure, the
perspective effect gives an approximate scale along the bridge varying from 3.6 mm/pixel (on
the right side) to 4.35 mm/pixel (on the left side of the photo). These scales are reversed in
the left view: taking as an example a point on the right side of the bridge, a reduction in
size of 20% is seen when passing from the right to the left view, plus some distortion. The
matching process will have to compensate for these changes of scale in order to accurately
associate the pixel pairs, as will be seen later.
The calibration of the stereo rig was accomplished with a mobile chessboard (Fig. 3.27.b) that
was successively located in a dense network of positions (Fig. 3.27.a & d) in the space between
the bridge and the cameras. At each position of the chessboard, a sequence of 20 frames was
shot in order to reduce the noise by averaging. The points at the cross-lines of the chessboard
were recognised and referred to the chessboard's own coordinate system. It is noteworthy that
in many positions the chessboard was only partly in the common field of view of both cameras.
The correspondence between points in the left and right views was resolved, giving two sets of
singletons, corresponding to points appearing exclusively in the left or right view, and a set
of coupled pixel points corresponding to matched points (in the common field of view). All this
work was done automatically with an in-house program. The singleton sets and the pair sets were
then processed with the method of Bouguet (Bouguet, 2007), in order to obtain the intrinsic
parameters of each camera and the extrinsic parameters describing their mutual positioning in
space.
Fig. 3. 27 Calibration of the stereo rig
a) top view of the set of chessboard positions, the frame has its origin on the left camera and its
Y axis is aligned with the optical axis of this camera. b) mobile chessboard, c) right view of the
beam, d) side view of the set of chessboard positions. The side part of the bridge’s beam is
shown in green in a and d, and the corresponding zone is shown in c
In Fig. 3.27.a, the position of each object is drawn in the frame of the left camera, which has
its origin at the centre of the CCD sensor, with the first axis parallel to the pixel lines
(1536 pixels) and the second axis parallel to the pixel columns (1024 pixels). The third axis
is along the optical axis of this camera. Both cameras were set on the same zone of interest of
the bridge, so their optical axes converged to a point at the centre of this zone. The
laboratory vertical axis is obtained by averaging over the set of local vertical vectors of the
chessboard. Both cameras were placed on the floor, so that the sight “under the bridge” would
be feasible as far as possible; in consequence, their optical axes are not horizontal. This is
clearly illustrated in Fig. 3.27.d; furthermore, the lower limits of the chessboards pertain to
a horizontal plane 30 cm above the floor, giving the origin of the laboratory vertical axis.
Fig. 3. 28 Optical distortion of the right camera
In a is shown the distortion effect, amplified by a factor of 10 in order to reveal the
characteristic pincushion distortion. In b the error modulus is shown, with a maximum of 18
pixels at the border
A plot of the optical distortion of the right camera is shown in Fig. 3.28, with a 10-times
amplification to reveal the pincushion distortion. In b, the modulus of the distortion
displacement is shown as a surface, as a function of the pixel coordinates. The distortion can
reach 20 pixels in the corners. Clearly, this error cannot be neglected in the 3D
reconstruction, and every pixel coordinate has been corrected for distortion in what follows.
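As an illustration of this correction step, the sketch below inverts a first-order radial distortion model of the kind estimated by Bouguet-type calibration; the principal point, focal length in pixels and distortion coefficient are assumptions for the sketch, not the calibrated values of the right camera:

```python
# De-distortion with a first-order radial model. All numeric values
# (principal point, focal length in pixels, k1) are illustrative.
def undistort(px, py, k1, cx=768.0, cy=512.0, f_px=2800.0):
    x, y = (px - cx) / f_px, (py - cy) / f_px   # normalised coordinates
    r2 = x * x + y * y
    # radial model: distorted = undistorted * (1 + k1 * r^2); inverted
    # here with a single fixed-point step, sufficient for small k1
    xu, yu = x / (1 + k1 * r2), y / (1 + k1 * r2)
    return xu * f_px + cx, yu * f_px + cy

# A pixel near the corner of the 1536 x 1024 frame moves by a few pixels
# inward, consistent with pincushion distortion (k1 > 0)
xc, yc = undistort(1530.0, 1020.0, 0.05)
print(round(1530.0 - xc, 2), round(1020.0 - yc, 2))
```

The correction is largest at the corners and negligible at the image centre, matching the error surface described for Fig. 3.28.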
3.5.4 Tracking methods
Up to now, many tasks such as the monitoring of building deformations or displacements have
been solved by means of artificial targets placed on the objects of interest. The extraction of
"interesting points" from the object surface (e.g. a window corner), which can replace the
artificial targets, is another interesting and developing optical method for monitoring seismic
movements (Reiterer et al. 2008). The method uses learning-based object recognition techniques
to search for relevant areas and to collect robust interest-point candidates to be tracked over
the long term, providing a deformation database. The task of deformation analysis is based, on
the one hand, on a traditional geodetic deformation analysis process and, on the other hand, on
a newly developed procedure called
deformation assessment. The main goal of this development is to measure, analyse and interpret
object deformations by means of a highly automated process.
An alternative is to use the natural texture of the object (Capéran, 2007b), or a mix of
artificial texture and targets (Anthoine, 2008), as in what follows.
3.5.4.1 Targets networks and artificial texture on the bridge
The beam and the concrete slab were painted with a random texture, and a loose mesh of targets
was superimposed on it. A high-definition view of the artificial texture of the bridge is
displayed in Fig. 3.29.a, with no perspective effect. In b) and c) the same zone is shown as
seen by the left and right cameras. The smearing of detail, due to the averaging on each pixel,
is evident. However, this proves sufficient to follow material points with an accuracy better
than 0.1 mm.
Fig. 3. 29 a) close-up view of the random texture of the bridge, b) corresponding window on left camera c) corresponding window on right camera
Fig. 3. 30 Synopsis of the tracking method
3.5.4.2 Tracking method and image matching
Fig. 3. 31 Illustration of the matching method between the left WOI (a) and the right WOI (b),
which is the C3 analytical template. This is compared to a reduced left WOI (c) by way of their
difference (not squared here, shown in d). The analytical template,
sampled on the red cross network (see (e)), is interpolated on the distorted blue network until
the squared difference reaches its minimum (the final direct difference is shown in (f))
The matching between two initial frames, e.g. matching from the right to the left view,
consists in associating each pixel of the right frame with its position, expressed with
sub-pixel accuracy, in the left view. In practice, an ensemble of Windows Of Interest (WOI) is
chosen in the right view, and these WOIs are matched, through adequate transformations, to
their corresponding stereo-paired windows in the left view. This could be done taking into
account the epipolar lines associated in the left frame with each right-view pixel, but this
was not done here.
The tracking consists in choosing some WOIs (with a side of ~11 pixels) in the initial
reference frame of one camera, and following these WOIs on the successive frames of the run
(of the same camera). These WOIs can correspond indifferently to targets or texture; indeed,
the technique used for matching or tracking is identical.
This technique is described in (Capéran, 2007) and is of the type first described by Lucas &
Kanade (1981); see the synopsis in Fig. 3.30. It can adapt to deformation and translation of
the reference WOI (reference template) under varying light conditions. The transformation of
the initial reference template is modelled by 8 parameters: 2 for the translation, 4 for the
linear deformation, and 2 for the lighting conditions. The cost function, which is the squared
difference between the template and the current image, is minimised with respect to these 8
parameters. As the reference template is interpolated by C3 thin-plate splines, the cost
function has an analytical expression as a function of the transformation parameters, and the
classical Newton-Raphson technique can be used to find a minimum at each step. The gradient and
the Hessian matrix involved in this optimisation process are straightforwardly derived, at any
parameter point, from the interpolated cost function, and sub-pixel approximation is naturally
introduced in this way.
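A minimal, translation-only variant of this minimisation can be sketched as follows; the full method additionally estimates 4 deformation and 2 lighting parameters and uses C3 thin-plate-spline interpolation, replaced here by bilinear interpolation on a synthetic image:

```python
import math

# Translation-only Lucas-Kanade sketch: Gauss-Newton minimisation of the
# squared difference between a template and a shifted image window.
def bilinear(img, x, y):
    x0, y0 = int(x), int(y)
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0][x0] + dx * (1 - dy) * img[y0][x0 + 1]
            + (1 - dx) * dy * img[y0 + 1][x0] + dx * dy * img[y0 + 1][x0 + 1])

def track(template, image, ox, oy, tx=0.0, ty=0.0, iters=25):
    h, w = len(template), len(template[0])
    for _ in range(iters):
        gxx = gyy = gxy = bx = by = 0.0
        for r in range(h):
            for c in range(w):
                x, y = ox + c + tx, oy + r + ty
                e = bilinear(image, x, y) - template[r][c]   # residual
                # half-pixel central differences as image gradients
                ix = bilinear(image, x + .5, y) - bilinear(image, x - .5, y)
                iy = bilinear(image, x, y + .5) - bilinear(image, x, y - .5)
                gxx += ix * ix; gyy += iy * iy; gxy += ix * iy
                bx += ix * e; by += iy * e
        det = gxx * gyy - gxy * gxy
        tx -= (gyy * bx - gxy * by) / det   # Gauss-Newton update
        ty -= (gxx * by - gxy * bx) / det
    return tx, ty

# Synthetic smooth image; template extracted at a known sub-pixel shift
N = 24
image = [[math.sin(0.5 * x) + math.cos(0.4 * y) for x in range(N)] for y in range(N)]
template = [[bilinear(image, 6 + c + 1.3, 6 + r + 0.7) for c in range(8)]
            for r in range(8)]
tx, ty = track(template, image, 6, 6)
print(round(tx, 3), round(ty, 3))   # recovers the imposed shift (1.3, 0.7)
```

The Hessian here reduces to the 2x2 matrix of image-gradient products; the 8-parameter version used in the report extends the same normal equations to the affine and lighting terms.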
An illustration of this matching technique is given in Fig. 3.31, where a right WOI (b) has
been sampled and used as a reference to be matched on the initial left image (a). The reduced
comparison domain is shown in (c). In (e), the red mesh corresponds to the data sites on which
the template (b) is known. The red dot network corresponds to the data sites of the
reduced comparison domain (c), and the distorted and translated blue network corresponds to the
interpolation points of the template, before subtraction from the reduced left window (c). Note
that passing from the red dots to the blue dots expresses the linear transformation from (c) to
the central part of (b); this corresponds to an expansion and a shear, with translation, as can
be seen by comparing (a) and (b). The initial difference is shown in (d), and the final
difference, in which only noise remains, in (f).
The initial 3D model of the beam, seen as a green surface in Fig. 3.27, comes from a complete
matching of the green rectangle in the right view, followed by an ad-hoc triangulation (with
compensation of the optical distortion) to reconstruct the 3D geometry. The use of point
tracking at successive times, combined with the matching of the initial stereo pair, allows
material points on the bridge to be followed throughout the experiment.
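The triangulation step can be illustrated with the classical midpoint method: each matched pixel defines a ray from its camera centre, and the 3D point is taken halfway between the closest points of the two (generally skew) rays. The camera geometry below is illustrative, not the actual rig calibration:

```python
# Midpoint triangulation of one matched stereo pair.
def closest_point(p1, d1, p2, d2):
    # Solve for s, t minimising |p1 + s*d1 - (p2 + t*d2)|^2
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    w = [x - y for x, y in zip(p1, p2)]
    d, e = dot(d1, w), dot(d2, w)
    den = a * c - b * b
    s = (b * e - c * d) / den
    t = (a * e - b * d) / den
    q1 = [p + s * v for p, v in zip(p1, d1)]   # closest point on ray 1
    q2 = [p + t * v for p, v in zip(p2, d2)]   # closest point on ray 2
    return [(x + y) / 2 for x, y in zip(q1, q2)]

# Two cameras 4 m apart, both viewing a point ~9 m away (cf. the rig)
target = [2.0, 9.0, 1.0]
left, right = [0.0, 0.0, 0.0], [4.0, 0.0, 0.0]
ray_l = [t - p for t, p in zip(target, left)]
ray_r = [t - p for t, p in zip(target, right)]
print(closest_point(left, ray_l, right, ray_r))   # -> [2.0, 9.0, 1.0]
```

With real, noisy matches the two rays do not intersect exactly, and the residual distance between them gives a useful per-point quality indicator.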
To illustrate the benefits one can expect from vision measurements in civil engineering, some
results obtained in the “Future Bridge” tests will be presented. They are partly extracted from
the report on the vision system made during the “Future Bridge” project. These measurements
were done in stereo view, a technique that had already been tested in our laboratory with
low-cost cameras (Lathuilière & Capéran, 2007). We will then present real-time tracking on a
testing loop.
3.5.5 PsD methodology: an example of stereo-vision measurements on the Future Bridge Project
3.5.5.1 Description of the experiment
Fig. 3. 32 Perspective view of the bridge
A section is shown on the right, with the FRP shell and the sandwich board between the FRP and
the concrete slab. A detail on the left shows the connection between FRP, sandwich and concrete
slab through shear studs
A perspective view of the “Future Bridge” experiment is shown in Fig. 3.32. The direction of
observation of the stereo rig is given by the red arrow. A section of the bridge is given in
the lower right corner of the image: it can be seen that the bridge is basically composed of
three elements, namely the composite shell (in red), the sandwich panel (in green) and the
concrete slab (in blue). The composite shell, hereafter named the FRP shell, is reinforced with
diaphragms, as can be seen at the nearest extremity in Fig. 3.32. A detail of the connections
between the concrete slab, the sandwich and the FRP is given in the upper left corner of the
image. The concrete slab has steel
reinforcement (not shown here), and the connection between the three constituents is ensured by
shear studs passing through the FRP shell and the sandwich panel and anchored in the concrete
slab during its casting. Two types of shear studs were disposed along the 14 m long bridge:
type 1 in the central part, over a length of 6 m, and type 2 on the rest of the bridge (some
measurements related to the shear studs will be presented below). The section in the upper left
corner of Fig. 3.32 shows only one stud; in fact they were disposed in two parallel lines,
alternately, with a constant spacing.
The setting-up of a proper reference frame linked to the bridge was accomplished by first
extracting a mean vertical direction from the set of chessboard positions (and thus a
horizontal plane). The longitudinal axis of the bridge was found by fitting a plane to the
surface of the beam (appearing in green in Fig. 3.27, deduced with a method described below)
and computing its intersection with the horizontal plane. The third direction was constructed
from the vertical and the longitudinal directions to obtain a direct reference system. The
origin of the frame depends on the type of study made on the bridge; e.g. if the zone of
interest is the slab, the origin is chosen on one of the targets towards the centre of the
bridge.
Thus, after calibration, the surface of the bridge in its initial position can be deduced, from
which a reference system linked to the initial geometry of the bridge is obtained. The first
coordinate relates to the direction perpendicular to the bridge and horizontal in the
laboratory, the second to the longitudinal axis of the bridge, and the third to the vertical
direction of the laboratory.
For the run considered here, the bridge rested on a knife-edge support at the nearest extremity
in Fig. 3.32. At its other end, in the zone observed by the vision system, it rested on a
roller support. Both steel supports were placed on reinforced concrete blocks, and the
interface between the steel supports and the composite beam was made by means of rubber mats.
The bridge was loaded at its centre using four actuators anchored to the strong floor
(see Fig. 3.32).
The first observation concerns the boundary conditions of the experiment. More precisely, it
concerns the strong floor, which resists the pull-up of the actuators and reacts to the
push-down of both concrete blocks.
3.5.5.2 Strong floor displacements
Fig. 3.33 shows the first right view of the run, with numbered points of interest. The
sub-windows on which tracking is performed are coloured in green. They comprise:
points on the bridge, corresponding to the LVDT attachments (1, 11 and 9), and one point on the
shell (41) used to set the origin along the bridge axis (this point is 30 mm from the shell end
on the right);
points considered as fixed, corresponding to targets stuck on iron masses placed on the ground
(13, 15, 17 and 19) or on the concrete block supporting the bridge. One point (23) corresponds
to an iron mass placed near the LVDT 26 attachment on the ground.
All these points correspond to stereo couples, so the tracking has been done jointly on the
right and left views. Note that point 23 is in a very dark region (in Fig. 3.33). The vertical
displacements obtained from the 3D optical measurements are shown in Fig. 3.34. The ground has
some vertical displacement (no more than 0.3 mm for point 23). The interesting fact is the
consistency of these curves. As points 13 and 15 are close to the zone of the floor where the
actuators are anchored, the floor level goes up, pulled by the actuators' anchors, as the
bridge is pushed down at the same time (e.g. points 9, 11). In a few words, points 13 and 15
are in anti-phase with the vertical displacement of the bridge. On the contrary, points 19, 25
and 23 are in phase with the bridge displacement: they are in the neighbourhood of, or on, the
concrete block supporting the bridge. The loading of the actuators is also applied to the floor
through the bridge and the block, so the floor must go down in the vicinity of the support.
Point 17 is more or less in a “neutral” position, where uplift and downward motion compensate
each other.
Notice the high level of noise at point 23, in contrast to the low level of noise at point 25,
while the signals are almost identical (except for some discrepancy in the last cycle). The
first point is in a shadowed zone, as was said previously, in contrast with the second point,
which is on the concrete
block in a bright and contrasted zone. This illustrates the importance of a good dynamic range
for the camera. In the present case, it is possible to detect variations of less than 0.1 mm,
even 0.05 mm (signal of point 15, first cycle). As the scale for a pixel is about 4 mm, this
gives a resolution of 1/80 of a pixel.
The slope of the floor can be evaluated from the 3D measurements: as the iron masses incline
with the floor, the longitudinal displacement of the target on top of an iron mass, divided by
the distance of the target to the ground, gives an approximation of the floor angle. This is
given for points 13 and 17 in Fig. 3.35. The variation of slope at the “neutral” point 17,
behaving as an oscillation node, is clearly put in evidence.
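A worked example of this small-angle estimate, with illustrative numbers rather than measured values:

```python
# Floor-slope estimate from a target on top of an iron mass of height h
# that shifts longitudinally by dx when the floor tilts (illustrative
# values, not measurements from the test).
dx_mm = 0.15      # longitudinal target displacement
h_mm = 300.0      # target height above the floor
slope_mrad = dx_mm / h_mm * 1000.0   # small-angle: tan(a) ~ a
print(slope_mrad)                    # 0.5 mrad
```

At these magnitudes the small-angle approximation is exact for all practical purposes, so a sub-0.1 mm displacement resolution translates directly into sub-milliradian slope resolution.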
Fig. 3. 33 Right view of the beam, with some measurement points and the LVDTs available for comparison
Fig. 3. 34 Evidence of the floor displacement
Point 13, which is near the actuator, is in anti-phase with point 9 on the bridge. Point 17 is
neutral, while point 25 is in phase. See the text for details
Fig. 3. 35 Evolution of the slope of the floor at points 13 and 17
3.5.5.3 General drift of the beam
An important parasitic effect was sensed by the vision system, as the bridge underwent an
unexpected drift. This drift could not be monitored with the classical LVDT sensors linking a
“reference” point to the bridge; on the contrary, it polluted their measurements with spurious
effects. The photogrammetry permitted the correction of this parasitic effect by providing the
drift at the LVDT points. The longitudinal and lateral drifts are shown in Fig. 3.36 for points
1, 9, 11 and 41. The analysis of the drift curves in Fig. 3.36, taking into account the
position of the points on the bridge, reveals a mean uniform longitudinal drift combined with a
small rotation. A more exhaustive investigation of the general displacement of the slab has
shown that this rotation was of the order of 0.88 mrad.
It has been shown that not only a longitudinal drift (as large as 50 mm at most) took place,
but also a lateral drift of 10 mm, on which a slight rotation is superimposed.
Fig. 3. 36 Drifting of the bridge along its axis (a) and perpendicular to it (b) for points 1, 41, 9 and 11
3.5.5.4 Opening and sliding between slab and sandwich
Fig. 3. 37 Right view of the concrete slab with targets indicated by red crosses. Cyan crosses correspond to the sandwich and green ones to the FRP
The connection between the slab and the beam is important to monitor, since sliding and opening
can appear at their interface. This connection is constrained by the shear studs, and a close
observation of the relative displacement of the slab and the sandwich panel is therefore of
interest. This study has been done on the target network deposited on these two elements
(Fig. 3.37). First a comparison with classical sensor measurements will be made, then profiles
of sliding and opening will be presented, and finally a qualitative analysis will corroborate
the quantitative information.
(1) Comparison with LVDT 22
Fig. 3. 38 Left and right views of the LVDT 22. The profile of the lever is delineated on the left view
Fig. 3. 39 Signal of the LVDT 22, compared to the distance between targets 77 and 569 at its extremities. The green curve corresponds to the sliding as measured from target 417
The red rectangle in Fig. 3. 37 delineates the close-up views of LVDT22, which are presented in
Fig. 3. 38. These left and right views show the installation of the LVDT22, used to measure the
sliding between the FRP shell and the slab. A lever (indicated by a cyan line in the left view) was
anchored by its lower end to the FRP’s edge. LVDT22 joins the upper extremity of this lever
(point 569) to an anchorage on the slab (point 77); the sensor appears as a faint shadow between
these two points, and its cable connection is clearly visible in front of the lever. As targets were
stuck on both extremities of the LVDT22, its length can be monitored with the vision system. The
comparison between the optical measurements (red line) and LVDT22 (black line) is shown in Fig.
3. 39. The maximum variation of length is 4 mm, approximately the mean pixel scale. The
agreement between the two signals is quite good for the first three cycles; it is less satisfying for
the last two. The noise on the optical signal is low, considering that this measurement
results from operations involving a difference between two 3D points. The pair of material points
that were followed shifted in space by 40 mm (during the first three cycles), which means that they
swept a zone of approximately 10 pixels on both photographs, yet the tracking kept its accuracy.
The green curve in Fig. 3. 39 corresponds to the relative horizontal displacement between the slab
(point 77) and the FRP (point 417); this can be considered a first approximation of the sliding
between these two components, as the inclination of the bridge to the horizontal did not exceed 20
mrad (measurements given by inclinometers). While the successive maxima (corresponding to the
ends of the cycles) correlate well between the green and black curves, a large discrepancy
occurs in between. This shows that the LVDT22 sensor does not give the pure sliding but a
composite effect of opening, sliding, and possibly a lever amplification effect.
(2) Sliding and opening obtained from optical method
To complement the LVDT22 measurements, points on the slab (lower line of
red crosses in Fig. 3. 37) and their corresponding points on the sandwich panel edge (line of cyan
crosses in Fig. 3. 37) were monitored. As already noted, the inclination of the deck with respect
to its initial state did not exceed 20 mrad; thus a good approximation of the sliding is given by the
horizontal displacement difference, and of the opening by the vertical displacement difference. In
this way, profiles of sliding and opening can be measured. The results are shown in Fig. 3. 40 as
profiles at successive loading maxima. The two vertical lines on these figures mark the
transition zone between the first (central) and second type of shear studs. For high loadings,
opening is high in the central zone (left of the vertical lines) whereas sliding slightly dominates
in the other zone. In the stud transition zone, sliding peaks appear at low loading and
opening peaks at high loading. Unfortunately, no texture was deposited on the edges of the FRP and
sandwich panel, so the profiles are based on a sparse set of points. It is noteworthy that
the same study made on profiles between FRP and slab gave the same behaviour. To illustrate
these observations, a qualitative study will now be presented.
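Under this small-tilt approximation (sliding ≈ horizontal displacement difference, opening ≈ vertical difference), the profile computation can be sketched as below. This is an illustrative sketch only; the displacement values are hypothetical, not data from the test:

```python
def sliding_and_opening(slab_pts, panel_pts):
    """Approximate sliding/opening profiles between slab and sandwich panel.

    slab_pts, panel_pts: matched lists of (horizontal, vertical) target
    displacements [mm] along the interface.  For deck tilts below about
    20 mrad, as measured in the test, the cosine correction is below
    0.02 % and can be neglected.
    """
    sliding = [hs - hp for (hs, _), (hp, _) in zip(slab_pts, panel_pts)]
    opening = [vs - vp for (_, vs), (_, vp) in zip(slab_pts, panel_pts)]
    return sliding, opening

# hypothetical displacements [mm] for three target pairs
slab = [(12.0, 1.5), (10.5, 3.2), (9.8, 0.4)]
panel = [(10.0, 0.5), (9.5, 0.2), (9.6, 0.1)]
sliding, opening = sliding_and_opening(slab, panel)
```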
Fig. 3. 40 In (a) the sliding profile of the concrete slab with respect to the sandwich panel is exhibited, at the successive loading maxima. In (b) the corresponding opening is shown
(3) Qualitative observation of the sliding and opening
The window of interest (WOI) delineated in green in Fig. 3. 37 is shown in a close-up view in Fig. 3.
41 b, at the initial state; it is placed in the middle, between a) and c), for ease of comparison. Fig. 3. 41 a and
c represent the WOI at the instant of maximum load, but these two frames have been processed in
different ways. As the displacement history (in pixels) is known for all the targets, this WOI can be
followed as a function of time. The top frame a) is the instantaneous follow-up of the initial zone,
obtained by correcting for the mean translation of this zone. The slight tilting of the bridge can be observed. This
frame can be transformed so that the slab part coincides with its initial position, as
exhibited in the bottom frame c). Left sliding of the sandwich and FRP elements can be seen with the help
of the superimposed red grid. Opening is also easy to distinguish on the left part of the figure, and a gap
is indicated by a red arrow. The left limit of these frames corresponds to the origin of the profiles in Fig. 3.
40, and targets are approximately 200 mm apart, so that the frames presented in Fig. 3. 41
correspond to the ‘central zone’ of the bridge.
Fig. 3. 41 Close-up view of the green rectangle in Fig. 3. 37 (right view), at b) the initial time, to be compared with a) and c). For c) the concrete slab has been registered to its initial state,
so that the relative displacements of targets on the sandwich panel and FRP are evidenced
3.5.5.5 Shell buckling
The aim of this study was to evidence the occurrence of shell buckling. Its effect
could be observed on some displacements monitored with the optical technique, which exhibited
discontinuities at time step 2070. Thus a zone of the beam was selected (see green rectangle in
Fig. 3. 9.c) on the initial (reference) right view, and each pixel of this reference
zone was matched with the left and right views at time steps 1, 2069 and 2071. In this way, material
surfaces of the beam were obtained for the initial state and for time steps on each side of the
observed discontinuity. These material surfaces are structured sets of points in 3D space, each
point corresponding to the same material point of the reference-state surface.
The surface of the reference state is almost planar within ±10 mm; it has random irregularities
superimposed on the structural shape (e.g. the diaphragms impose their shape on the shell, while it is
“free” in between). The processing was made as follows:
- A best-fitting “mean plane” was found for each of the three surfaces. To do this, some
parts of the selected zones were removed prior to the fitting, namely the upper and lower
parts of the beam where the profile is curved.
- These three planes were used as references for the three surfaces, and a material point
corresponding to the lower right corner of the studied zone (see Fig. 3. 9.c) was chosen as
the origin for each plane (it is the point closest to the support). The intersection of the
laboratory horizontal plane with each “mean plane” permits building a local reference
frame on each plane (with its normal completing it in 3D space).
- Considering a given surface, its “material points” were projected on the associated plane
and on its normal, respectively giving their in-plane and out-of-plane coordinates.
- The use of the reference surface permits computing the in-plane and out-of-plane
displacements.
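The first two steps (fitting a “mean plane” and measuring out-of-plane coordinates against it) can be sketched as follows. This is a minimal illustration, not the processing code used in the test; for a near-horizontal surface a linear least-squares fit of z = a·x + b·y + c is sufficient:

```python
import math

def _det3(a):
    """Determinant of a 3x3 matrix given as a list of rows."""
    return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
            - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
            + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))

def _solve3(m, r):
    """Cramer's rule for a 3x3 linear system m x = r."""
    d = _det3(m)
    out = []
    for i in range(3):
        mi = [row[:] for row in m]
        for j in range(3):
            mi[j][i] = r[j]
        out.append(_det3(mi) / d)
    return out

def fit_mean_plane(pts):
    """Least-squares 'mean plane' z = a*x + b*y + c through 3D points,
    valid when the surface is roughly parallel to the x-y plane, as for
    the selected shell zone (planar within about +/-10 mm)."""
    sxx = sum(x * x for x, _, _ in pts); sxy = sum(x * y for x, y, _ in pts)
    syy = sum(y * y for _, y, _ in pts); sx = sum(x for x, _, _ in pts)
    sy = sum(y for _, y, _ in pts); n = float(len(pts))
    sxz = sum(x * z for x, _, z in pts); syz = sum(y * z for _, y, z in pts)
    sz = sum(z for _, _, z in pts)
    return _solve3([[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]],
                   [sxz, syz, sz])

def out_of_plane(p, a, b, c):
    """Signed distance of point p from the plane: its out-of-plane coordinate."""
    x, y, z = p
    return (z - (a * x + b * y + c)) / math.sqrt(a * a + b * b + 1.0)

# synthetic check: four points lying exactly on z = 2x + 3y + 1
a, b, c = fit_mean_plane([(0, 0, 1), (1, 0, 3), (0, 1, 4), (1, 1, 6)])
```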
Perspective views of the meshes representing the same material points at time steps 1 (black) and
2069 (red) are presented in Fig. 3. 42. The bending of the beam at step 2069 is easily seen. Some
bulge and depression zones, relative to the reference state, are already present on the surface at time step
2069.
Fig. 3. 42 Perspective views of the reference surface (black) and of its displaced position at time step 2069 (red). A bulge and a depression can be seen on the red surface with respect to
the reference one
Fig. 3. 43 The difference between the out-of-plane displacements at time steps 2071 and 2069 reveals the shell buckling
The difference between the out-of-plane displacements at time steps 2071 and 2069, plotted on the
basis of the reference mesh, is shown in Fig. 3. 43. This reveals the buckling occurring at time
step 2070, with an amplitude of ±15 mm. As mentioned previously, some bulge and
depression zones pre-existed the buckling; in fact this event corresponds to a sudden extension of
these zones. The effect may result from a sudden weakening of the link between diaphragm and
shell. Only the vision system was able to fairly quantify this phenomenon.
3.5.6 On some real time displacement measurements
Some real-time measurements were done during a sub-structured experiment on a damper (see
Fig. 3. 44). The damper is sandwiched between the floor and a square plate with tensioned
Dividag bars at its four corners, so that its vertical load can be controlled. The horizontal loading is
applied by an actuator, and displacement control is performed by a Heidenhain sensor along the actuator
axis. The camera is aimed at a target stuck on the actuator head, and frames of 400 x 200 pixels
are sampled synchronously with the basic frequency of the experiment, between 1 and 2 Hertz.
The target is tracked in real time with a normalised correlation program, and the Heidenhain curves
are plotted together with the optical results in real time (see Fig. 3. 45). The results are satisfying
and, as usual, more information can be extracted from the optical signal: a transversal
displacement can be measured that could not be sensed with the Heidenhain (see Fig. 3. 46).
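The normalised-correlation tracking mentioned above can be illustrated by a minimal sketch. This is not the actual real-time program; it shows the principle (zero-mean normalised cross-correlation, exhaustive search) on a tiny synthetic frame with a hypothetical target pattern:

```python
import math

def ncc(template, window):
    """Zero-mean normalised cross-correlation between two equal-size patches."""
    n = len(template) * len(template[0])
    mt = sum(map(sum, template)) / n
    mw = sum(map(sum, window)) / n
    num = den_t = den_w = 0.0
    for row_t, row_w in zip(template, window):
        for t, w in zip(row_t, row_w):
            num += (t - mt) * (w - mw)
            den_t += (t - mt) ** 2
            den_w += (w - mw) ** 2
    if den_t == 0.0 or den_w == 0.0:  # flat patch: correlation undefined
        return -1.0
    return num / math.sqrt(den_t * den_w)

def track(frame, template):
    """Exhaustive search for the best NCC match of template in frame;
    returns the (row, col) of the top-left corner of the best match."""
    th, tw = len(template), len(template[0])
    best, best_pos = -2.0, (0, 0)
    for r in range(len(frame) - th + 1):
        for c in range(len(frame[0]) - tw + 1):
            window = [row[c:c + tw] for row in frame[r:r + th]]
            score = ncc(template, window)
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

# synthetic 6x8 frame: smooth gradient with the target pattern pasted at (2, 3)
frame = [[float(r + 2 * c) for c in range(8)] for r in range(6)]
target = [[9.0, 1.0], [1.0, 9.0]]
frame[2][3], frame[2][4] = 9.0, 1.0
frame[3][3], frame[3][4] = 1.0, 9.0
```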
Fig. 3. 44 a) Experimental set-up; the actuator loading the damper is clearly visible on the right side of the photo. The camera on the left partially hides the damper in the background, which is vertically loaded by a square plate and 4 Dividags. b) A detail of the piston on which the tracked target is stuck
Fig. 3. 45 a) comparison of optical results (green) with Heidenhain (red) and Temposonics
(blue); b) difference between Heidenhain and optical methods
Fig. 3. 46 a) longitudinal and lateral displacements; b) cycles
3.5.7 Shake table methodology: recent research efforts in using photogrammetry
The use of high-speed cameras able to capture 200 frames per second in previous research (Fujita
et al. 2005) allowed an accurate capture of the movements of the shake table. The limited
maximum resolution of the cameras used (504x242) meant that only a very small area could be monitored
by each pair of cameras in order to achieve the desired accuracy along the X, Y and Z axes.
Furthermore, the use of custom luminous LED markers requires some possibly undesired
physical contact with the subject being tested.
A prototype camera rig consisting of four 640x480 cameras capable of capturing 5 frames
per second, with a future upgrade to a system capable of 500 fps, tested the concept of
videogrammetry for the monitoring of civil engineering structures (Tait et al. 2007). The accuracy
of the calibrated system using signalized points was tested and found to be of the same order as a
simulated design for the system based on constraints of test-object dimension and available
stand-off distance from the object. Synchronizing the cameras remains the biggest issue facing
the use of such a system. Nevertheless, the advantages to be derived from this non-contact, three-dimensional, full-field method have been shown to be attainable.
In another study (Doerr et al. 2005), an image-based capture system consisted of four high-speed
charge-coupled device (CCD) cameras connected to a server-style PC with extended
storage and networking capabilities. All cameras operated at a resolution of 658x494 pixels and
were capable of acquiring images at 80 frames per second. The camera synchronization problem
was adequately solved with an appropriate software solution. The experimental evaluation
of the system demonstrated that the required data transfer capabilities can be achieved on a server-style
PC and that commodity hardware is sufficient to acquire, archive and process sensor data in
real time. Sample waveforms were extracted using pixel-based algorithms applied to images
collected with the array of high-speed, high-resolution CCD cameras,
and presented a reasonable match with data provided by traditional accelerometers.
Videogrammetry is thus a useful tool for determining such deformations. As early
as 1995, efforts were underway to implement stereoscopic video sequences for such
applications (Georgopoulos et al. 1995). A custom-developed stereoscopic system has served to
monitor, re-observe and measure seismic experiments on the shake table (Georgopoulos &
Tournas 1999 and 2001, Tournas 1999).
3.5.8 Commercial Integrated Systems
Ready-built commercial systems that fit the project’s needs do exist, mainly as hardware-software bundles. However, those products mostly target production industries and are balanced
towards accuracy rather than sampling frequency, while their cost also far exceeds the
current project’s budget. Those most relevant to the present
project are presented in the following.
V-STARS/M by Geodetic Systems Inc. (http://www.geodetic.com/) is a 3D coordinate measuring
system that uses two or more INCA cameras to make fast, accurate, real-time measurements. The
V-STARS/M system is based on the single-camera V-STARS/S system and is capable
of real-time measurements of targeted points or probes. The 3D data is reported to the system
Assuming that the seismic frequency range is 0-50 Hz, a sampling rate of at least
100 Hz is necessary to capture the changes in motion without aliasing, according to the Nyquist
sampling theorem. Concerning the accuracy requirements, a coordinate measurement accuracy in
the order of 1 mm is satisfactory in most cases.
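The Nyquist requirement can be illustrated with a small sketch: content below half the sampling rate is preserved, while higher frequencies fold back to a spurious lower frequency:

```python
def alias_frequency(f_signal, f_sample):
    """Apparent frequency (Hz) of a sinusoid of frequency f_signal (Hz)
    when sampled at f_sample (Hz): it folds back towards the nearest
    multiple of the sampling rate."""
    k = round(f_signal / f_sample)
    return abs(f_signal - k * f_sample)

# content up to 50 Hz sampled at 100 Hz is reproduced faithfully...
ok = alias_frequency(40.0, 100.0)   # 40 Hz stays 40 Hz
# ...but undersampled content folds: a 70 Hz component sampled at 100 Hz
bad = alias_frequency(70.0, 100.0)  # appears as a spurious 30 Hz motion
```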
The first important aspect that should be taken into account is the sensor technology that will be
selected. Two different technologies for capturing images digitally are currently available on the
market: CCD (charge coupled device) and CMOS (complementary metal oxide semiconductor).
Each of them has unique strengths and weaknesses giving advantages in different applications:
• CCD sensors are more sensitive than CMOS and create high-quality, low-noise images; CMOS sensors are more susceptible to noise.
• CCD is much better for low-contrast images; the light sensitivity of a CMOS sensor tends to be lower.
• CMOS sensors have much lower power consumption; CCDs can consume as much as 100 times more power than an equivalent CMOS sensor.
• CMOS sensors are extremely inexpensive compared to CCD sensors.
• CCD is the more mature technology, tending to offer higher quality and higher pixel resolution.
By examining the above differences, it is obvious that CCD sensors tend to be used in cameras
that focus on high-quality images with lots of pixels and excellent light sensitivity. This is the
case of photogrammetric measurements, where the quality of the images and the pixel resolution
play a predominant role.
When the cameras are used to capture a moving scene, the sharpness of a frozen image depends
on the technique used to render the video. Commercial video cameras usually create interlaced
video images, which are compatible with the common standards for transmitting video signals
between cameras and other devices such as TV monitors, video frame grabbers and video
players. Interlacing divides each image frame into odd and even rows and then alternately
refreshes them at 30 frames per second. The slight delay between odd and even row refreshes
creates some distortion or blurring, because only half the rows keep up with the moving
image while the other half waits to be refreshed. To avoid such blurred images a progressive
scan camera should be used. Progressive cameras read the entire image row by row within
the same scan, and therefore no image blur is visible.
The quality of the frozen images in a video sequence is also affected by the synchronization of
the image row acquisitions. Some CMOS sensors operate in "rolling shutter" mode, which means
that the rows start and stop exposing at different times. This type of shutter is not suitable for
moving subjects, because this time difference causes the image to smear. To avoid this problem a
"global shutter" mode should be available, in which the camera starts and stops the exposure of
all image rows simultaneously. An example of an image taken using a rolling shutter is
shown in Fig. 3. 47. For seismic table monitoring applications, where
images of fast-moving objects must be captured without smear or distortion, operation in
global shutter mode is a 'must have'.
Fig. 3. 47 Rolling Shutter and global shutter video capture
The accuracy of the coordinate measurements mainly depends on two parameters: image
resolution and distance from the object. Higher image resolutions result in higher measurement
accuracy for the same object distance, and closer distances to the object result in better accuracy for
the same image resolution. Assuming a distance from the object of about 4 m, images at
1024x1024 pixel resolution (= 1 Megapixel) are sufficient to obtain a coordinate measurement
accuracy in the order of 1 mm in the Z direction and 0.6 mm in the X, Y directions.
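The quoted in-plane accuracy can be cross-checked with a simple pixel-footprint estimate. The field of view below (about 17°) and the half-pixel target localisation are assumptions made for illustration; neither value is stated in the text:

```python
import math

def gsd_mm(distance_m, fov_deg, n_pixels):
    """Ground sample distance: object-space width of one pixel, in mm,
    for a camera at distance_m whose horizontal field of view fov_deg
    is spread over n_pixels."""
    scene_width_m = 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)
    return scene_width_m * 1000.0 / n_pixels

# assumed values: 4 m stand-off, ~17 deg field of view, 1024-pixel sensor
pixel_mm = gsd_mm(4.0, 17.0, 1024)    # ~1.2 mm of object space per pixel
inplane_accuracy_mm = 0.5 * pixel_mm  # with half-pixel target localisation
```

With these assumptions the in-plane estimate comes out near the 0.6 mm quoted above.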
According to the above-mentioned parameters, two CCD progressive scan cameras with
1 Megapixel resolution at a frame rate of 100 frames per second (fps), operating in global shutter
mode, are adequate for monitoring seismically induced motions in space. Such a camera setup
produces 200 Mbytes of raw image data per second that have to be transferred to a computer
system and stored for further processing. Since the amount of data produced by each camera
reaches 100 Mbytes/sec, a high-bandwidth data transfer protocol should be employed. Four
data transfer protocols are currently available on the market:
• IEEE-1394 (FireWire™) is a low-cost, high-bandwidth real-time data transfer standard.
It enables data transfer rates up to 50 Mbytes/sec. The newer IEEE-1394b standard is a
high-speed revision of the original which allows transfer rates of up to 800 Mbits/sec
(100 Mbytes/sec).
• CameraLink™ is a high-speed data transfer protocol specifically designed for camera-
framegrabber interfacing. It significantly simplifies interconnection between camera and
framegrabber. CameraLink™ has a range of levels of compliance: base (300 Mbytes/sec),
medium (600 Mbytes/sec), and full (900 Mbytes/sec).
• USB-2 is the higher speed version of the USB interface commonly used to connect
computer peripherals. It enables transfer rates up to 60 Mbytes/sec.
• Gigabit Ethernet (GigE) is a high bandwidth development of the standard Ethernet
protocol used for PC and peripheral network connection. It enables transfer rates up to
125 Mbytes/sec. Using two Ethernet ports configured as a Link Aggregation Group
(LAG) on the same camera device a maximum data rate of 240 Mbytes/sec can be
obtained.
Of the above protocols, IEEE-1394b, CameraLink and Gigabit Ethernet satisfy the
bandwidth requirements of 1-Mpixel cameras at 100 fps. Gigabit Ethernet is a very promising
solution that is employed by several companies, since it can be used to transfer large amounts of
data over long distances (up to 100 m). In addition, GigE cameras are reinforced with a packet
re-send mechanism that can eliminate the loss of transferred data. Furthermore, the overall cost
of a vision system can be reduced with these cameras, thanks to the availability of a variety of
low-cost peripheral devices.
The problem with GigE cameras is that Gigabit Ethernet may not always achieve its
125 Mbytes/sec transfer rate; this depends on how the Gigabit Ethernet chip is connected to the
system. If it is connected to the standard PCI bus, it probably will not achieve its full speed. The PCI
bus works with a maximum transfer rate of 133 Mbytes/sec, while Gigabit Ethernet runs up to
125 Mbytes/sec. By just comparing these two numbers, it seems that Gigabit Ethernet “fits” the PCI bus.
The problem is that the PCI bus is shared with several other components of the system, which
lowers the available bandwidth. So even though in theory Gigabit Ethernet can run fine on
the PCI bus, it is just too close to the bandwidth limit of the bus. That is why a Link Aggregation
Group (LAG) connection is needed when transfer rates exceed 80 Mbytes/sec.
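The arithmetic behind this recommendation can be sketched as follows; the 80 Mbytes/sec practical ceiling is the figure quoted in the text, not a measured value:

```python
def raw_rate_mb(width, height, fps, bits_per_pixel=8):
    """Uncompressed video data rate in Mbytes/s (1 Mbyte = 10**6 bytes)."""
    return width * height * fps * bits_per_pixel / 8 / 1e6

# practical GigE ceiling on a shared PCI bus, per the text above
PCI_SHARED_LIMIT_MB = 80.0

def needs_lag(rate_mb):
    """True when a single GigE link on a shared PCI bus should not be relied on."""
    return rate_mb > PCI_SHARED_LIMIT_MB

# one 1-Mpixel mono camera at 100 fps: ~105 Mbytes/s, above the ceiling
rate = raw_rate_mb(1024, 1024, 100)
```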
In order to store the acquired image sequences, a high-speed storage device should be available.
Currently available hard disk drives offer very short access times, resulting in fast read and write
speeds. Revolutions per minute (RPM) is usually used to help determine the access time of a hard
disk: it measures how many complete revolutions the drive makes in a single minute, and
the higher the RPM, the faster data will be accessed. The highest
RPM ratings currently available on the market are 10000 and 15000. Hard disks at 10000 RPM
can write sequential files at an average speed of 100 Mbytes/sec, while hard disks at 15000 RPM
can write sequential files at an average speed of 125 Mbytes/sec. Taking into account that 100
Mbytes/sec have to be stored for each camera, two 15000 RPM hard disk drives have to be used,
one per camera. Alternatively, a compression scheme may be used to reduce the
data volume before storage; in this case a single hard disk drive may be sufficient.
Another solution to the storage problem is the use of solid-state disks (SSDs) instead of
conventional hard disk drives. SSD devices have no mechanical parts. They offer
sequential write speeds in the order of 200 Mbytes/sec, but they are more expensive than
the corresponding hard disks. Another problem of SSD drives is that their read/write
performance degrades over time: an SSD with a write speed of 200 Mbytes/sec may fall to
185 Mbytes/sec in a short period of time. Thus, either two SSD drives have to be used in the
same way as previously described, or the Intel SSD Optimizer (http://download.intel.com/support/ssdc/hpssd/sb/intel_ssd_optimizer_white_paper_rev_2.pdf) must be run regularly.
3.5.10 Photogrammetric System Configuration
Based on the above, a system for 3D stereoscopic video capture was developed at the Laboratory
for Earthquake Engineering (LEE) of NTUA. The system uses two high resolution CCD cameras
connected to a PC with enhanced communication and storage capabilities. The main
characteristics of the cameras used include:
Brand name: Prosilica GX1660 (monochrome)
Manufacturer: Allied Vision Technologies (http://www.alliedvisiontec.com)
Resolution: 1600x1200 pixels
Type: CCD progressive
Sensor size: 2/3”
Cell size: 5.5 μm
Frame rate: 66 frames per second at full resolution
Bit depth: 8/12 bits (monochrome)
Interface: IEEE 802.3 1000baseT
Lens mount: C
Additional features: auto exposure, auto gain, auto white balance, pixel binning, region-of-interest readout, asynchronous external trigger and sync I/O, global shutter, video-type auto iris
Since the GX1660 cameras are offered without lenses, two Fujinon HF12.5SA-1 (1:1.4/12.5 mm)
lenses were attached to the camera devices.
Each GX1660 camera has two screw-captivated Gigabit Ethernet ports configured as a Link
Aggregation Group (LAG) to provide a sustained maximum data rate of 240 Mbytes per second.
Two dual-port network cards are used to connect the two cameras to the computer motherboard.
The storage system includes two solid-state disks (SSDs) with 285 Mbytes/sec write speed and 480
GB capacity in total. The employed computer system consists of the following components:
Motherboard: Asus P7P55 WS SuperComputer
Processor: Intel Core i5 CPU 760 @ 2.80 GHz
Memory (RAM): 4 GB
Graphics card: nVidia Quadro FX 380
Storage: 1 x Seagate ST31000528AS 1 TB; 2 x Corsair Force 240 GB solid-state disks
Networking: 2 x Intel Pro/1000PT Dual Port (EXPI9402PT)
Operating system: Windows 7 Pro 64-bit
The two cameras are placed on a solid aluminium bar at a distance of about 0.70 m from each other.
A small 8” touch screen device was also placed in the middle of the bar, allowing the control of
the two cameras without the need for additional input devices. The system configuration is shown
in Fig. 3. 48.
Fig. 3. 48 Configuration of vision system developed at LEE/NTUA
3.5.11 Software development
In order to monitor seismic motions in 3D space, specialized software for camera
synchronization, image acquisition and time-stamping is needed. In addition, computer vision
software for camera calibration, exterior orientation, target tracking and photogrammetric
triangulation must also be available.
3.5.11.1 Stereoscopic video capture
Stereoscopic video capture from two independent cameras can be accomplished by using the
internal CPU clock of the Intel processor. Each incoming frame is time-stamped when it is
transferred from the camera to the computer memory. The synchronization accuracy that can be
achieved varies between 0 and 1000/fps milliseconds; in the case of a GX1660 camera the
synchronization accuracy is between 0 and 16 msec. The video capture is driven by two
separate threads, one for each camera. The incoming frames are temporarily stored in
a cyclic buffer and then transferred to the computer storage unit. The incoming data amount to
about 121 Mbytes/sec (66 fps, 1600x1200 pixels, mono, uncompressed) for each camera; to
save them successfully, a disk throughput of 121 Mbytes/sec is necessary for each camera.
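These two figures (the 0-16 ms synchronization window and the ~121 Mbytes/sec stream rate) follow directly from the camera parameters, as this small sketch shows (here 1 Mbyte is taken as 2^20 bytes, which reproduces the quoted value):

```python
def sync_uncertainty_ms(fps):
    """Worst-case time-stamp misalignment between two free-running cameras:
    one frame period, i.e. 1000/fps milliseconds."""
    return 1000.0 / fps

def stream_rate_mib(width, height, fps):
    """Uncompressed 8-bit mono stream rate, with 1 Mbyte = 2**20 bytes."""
    return width * height * fps / 2.0 ** 20

gx1660_sync = sync_uncertainty_ms(66)          # ~15.2 ms, i.e. 'between 0 and 16 msec'
gx1660_rate = stream_rate_mib(1600, 1200, 66)  # ~121 Mbytes/s per camera
```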
The software for stereoscopic video capture that was developed at LEE/NTUA is shown in
Fig. 3. 49. The two video streams from the left and right cameras of the stereo rig are shown side
by side. The functions available to the user include:
Project definition: Each project has a unique name and may have several video streams. Each video stream is stored in a separate binary file. The name of a video stream is automatically generated by the software with extension “*.frm”. Video streams acquired simultaneously from the left and right cameras have the same name (stored in different directories).
Connect: Check the availability of the cameras and start streaming. The incoming images are shown at 25 fps, even if the acquisition frame rate is higher.
Disconnect: Stop video streaming and close the cameras.
Auto exposure: Auto exposure control (ON/OFF).
Auto gain: Auto gain control (ON/OFF); only for color cameras.
White balance: Auto white balance control (ON/OFF); only for color cameras.
Actual pixels: By default, the video is shown at 400x300 pixels resolution. When actual pixels are activated, only a small part of the center of the frame is shown at 1-by-1 screen resolution.
Large frames: Change to 800x600 pixels resolution.
Reset camera settings: Auto exposure, gain and white balance control are set back to their initial values.
Snap: Take a single frame simultaneously from both cameras. The names of the images are automatically generated. The images are saved in JPEG format.
Capture: Start/stop video capture.
Fig. 3. 49 Software for stereoscopic video capture developed at LEE/NTUA
Fig. 3. 50 Stereoscopic video play-back of the system developed at LEE/NTUA
3.5.11.2 Stereoscopic video play-back
The stereoscopic video play-back application that was developed at LEE/NTUA is shown in
Fig. 3. 50. The user selects a project file (created by the stereoscopic video capture application) and
the first video stream is displayed on the screen. Left and right video images are shown side by
side. The current frame number is shown in the bottom right corner in red, and the time stamp
of the frame is shown there in yellow. Since a project file may contain more
than one video stream, a drop-down menu for video file selection is available from the main
menu. Play-back, stop, step-forward and step-backward functionality is also included.
3.5.11.3 Camera calibration
Photogrammetry provides a variety of methods for determining the interior and exterior
orientation parameters of a camera, relating image measurements to scene coordinates through an
appropriate calibration field. Due to the high accuracy demands of the application, the use of an
accurate calibration field is critical. Such a calibration field should satisfy two basic
requirements: automatic target recognition using well-known image processing techniques, and
absolute accuracy better than 1/5 mm.
In the implementation at LEE/NTUA, a calibration plate in the form of a chessboard is used. The
calibration method adopted was proposed by Zhang and is described in detail in Zhang
(1999). The calibration pattern consists of 9 x 12 black and white squares with a cell size of 65x65
mm. Several photographs of the calibration plate were acquired from different positions and
orientations with a constant tilt angle of 45° (Fig. 3. 51). About 40 images were actually used for
camera calibration. The internal corners of the chessboard pattern were automatically identified
with sub-pixel accuracy and used as input measurements for the calibration procedure. As a result,
the extrinsic and intrinsic parameters of the camera calibration are estimated.
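Zhang's method needs, for each calibration image, the world coordinates of the internal chessboard corners in the plane of the plate (Z = 0). A minimal sketch of that grid for the board described above is given below; in practice these points would feed corner-detection and calibration routines (e.g. OpenCV's findChessboardCorners and calibrateCamera, since the software is built on OpenCV 2.2):

```python
def chessboard_object_points(inner_cols, inner_rows, cell_mm):
    """World coordinates (mm) of the internal chessboard corners, with the
    plate defining the Z = 0 plane, ordered row by row as calibration
    software usually expects."""
    return [(c * cell_mm, r * cell_mm, 0.0)
            for r in range(inner_rows) for c in range(inner_cols)]

# a 9 x 12-square board has 8 x 11 internal corners, 65 mm apart
pts = chessboard_object_points(8, 11, 65.0)
```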
Fig. 3. 51 Indicative camera positions for camera calibration
Fig. 3. 52 Camera calibration software developed at LEE/NTUA
The software for camera calibration that was developed at LEE/NTUA is shown in Fig. 3. 52.
The main functions include automatic chessboard identification, coordinate measurement, and
interior and exterior orientation. The software is developed on top of OpenCV 2.2, an open
library of programming functions for real-time computer vision. According to the OpenCV
camera model, a scene view is formed by projecting 3D points onto the image plane using the
following perspective transformation:
s\,m' = A\,[R\,|\,t]\,M'  (1)

or

s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} =
\begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} r_{11} & r_{12} & r_{13} & t_1 \\ r_{21} & r_{22} & r_{23} & t_2 \\ r_{31} & r_{32} & r_{33} & t_3 \end{bmatrix}
\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}  (2)
where (X,Y,Z) are the coordinates of a 3D point in the world coordinate space and (u,v) are the
coordinates of the projection point in pixels; A is called the camera matrix, or matrix of intrinsic
parameters; (cx,cy) is the principal point (usually at the image center); fx, fy are the focal
lengths in the x and y directions, expressed in pixel-related units. The joint rotation-translation matrix [R|t]
is called the matrix of extrinsic parameters. The transformation above is equivalent to the
following (when z ≠ 0):
\begin{bmatrix} x \\ y \\ z \end{bmatrix} = R \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} + t

x' = x/z, \quad y' = y/z

u = f_x \cdot x' + c_x, \quad v = f_y \cdot y' + c_y  (3)
which is the well-known collinearity equation used in photogrammetry. Real lenses usually have
some distortion, mostly radial distortion and a slight tangential distortion, so the above model is
extended as:
\begin{bmatrix} x \\ y \\ z \end{bmatrix} = R \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} + t

x' = x/z, \quad y' = y/z

x'' = x'\,(1 + k_1 r^2 + k_2 r^4) + 2 p_1 x' y' + p_2 (r^2 + 2 x'^2)

y'' = y'\,(1 + k_1 r^2 + k_2 r^4) + p_1 (r^2 + 2 y'^2) + 2 p_2 x' y'

\text{where } r^2 = x'^2 + y'^2

u = f_x \cdot x'' + c_x, \quad v = f_y \cdot y'' + c_y  (4)
where k1, k2 are radial distortion coefficients and p1, p2 are tangential distortion coefficients.
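A direct transcription of Eqs. (3)-(4) in plain Python can be useful for checking a set of calibration parameters. The intrinsic values below are hypothetical, chosen only for illustration:

```python
def project_point(P, R, t, fx, fy, cx, cy, k1=0.0, k2=0.0, p1=0.0, p2=0.0):
    """Project a 3D world point P = (X, Y, Z) to pixel coordinates (u, v)
    using the camera model of Eqs. (3)-(4): rigid transform, radial
    (k1, k2) and tangential (p1, p2) distortion, then the intrinsics."""
    X, Y, Z = P
    # world -> camera frame: [x y z]^T = R [X Y Z]^T + t
    x = R[0][0] * X + R[0][1] * Y + R[0][2] * Z + t[0]
    y = R[1][0] * X + R[1][1] * Y + R[1][2] * Z + t[1]
    z = R[2][0] * X + R[2][1] * Y + R[2][2] * Z + t[2]
    xp, yp = x / z, y / z          # normalised image coordinates
    r2 = xp * xp + yp * yp
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xpp = xp * radial + 2.0 * p1 * xp * yp + p2 * (r2 + 2.0 * xp * xp)
    ypp = yp * radial + p1 * (r2 + 2.0 * yp * yp) + 2.0 * p2 * xp * yp
    return fx * xpp + cx, fy * ypp + cy

# hypothetical intrinsics, camera aligned with the world frame
I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
u, v = project_point((1.0, 2.0, 5.0), I3, [0.0, 0.0, 0.0],
                     fx=1200.0, fy=1200.0, cx=800.0, cy=600.0)
```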
The estimated calibration parameters for the two GX1660 cameras used are:
3.5.12.4 Using the stereovision system during shaking table tests: drums stacked on AZALEE table
A first experimental application of the stereovision system has been carried out during a seismic
qualification testing campaign. Different configurations of standard drums stacked on pallets
have been placed on the AZALEE table in order to check their stability under different seismic
loadings.
AZALEE is the 6D shaking table of the EMSI laboratory. Its platform is 6 x 6 m and can support a 100-ton mock-up. It is driven by eight 1000 kN servo-hydraulic actuators (4 in the horizontal plane, 4 vertical). Four pneumatic static supports under the table compensate part of the mass of the table and mock-up system. The maximum displacements are ±125 mm in the horizontal plane and ±100 mm in the vertical direction.
The different drum stacks have been placed on a concrete floor (3 x 3 x 0.2 m) covered with an epoxy coating (see figures below), as in the real industrial building that will receive the drums. The stacks have been submitted to representative seismic signals.
Fig. 3. 66 Concrete floor with epoxy coating (top view)
Fig. 3. 67 Drums stack on AZALEE table (concrete floor fitted on the table by 8 bolts)
The different testing configurations are summarized in the following table.

Drums stacks test configurations:

Drum type  | Pallets | Stack type
100 litres | 3       | 5 drums on each pallet
100 litres | 2       | 2 pallets of 5 drums, with 1 pallet of 4 drums on top
200 litres | 3       | 4 drums on each pallet
200 litres | 3       | 5 drums on each pallet
200 litres | 2       | 4 drums on each pallet

In total, 24 tests have been performed, covering all the stack configurations.
Fig. 3. 68 Examples of accelerograms for drums stacks seismic tests: response spectra of the
generated accelerograms (grey) against the theoretical spectrum (green), for the X, Y and Z
directions
The main goals of the study were:
• First, to check that no drum falls from the stack during the seismic excitation.
• Second, to measure the maximum displacements of the drums during the excitation.
To protect the testing equipment, metallic structures have been placed around the stacks to
prevent an accidental collapse, but at some distance so as not to interfere with the drums
during the tests. Due to the size and number of the metallic beams, the field of view of the
VIDEOMETRIC system cameras is quite reduced (Fig. 3. 69).
Fig. 3. 69 A typical 3 pallets and 3x4 drums on AZALEE
The whole mock-up (shaking table, concrete floor, base pallet, top drums) has been
instrumented with various sensors (Fig. 3. 70):
• Actuator displacement sensors.
• Shaking table accelerometers.
• 2 cable displacement sensors on one top drum.
• Several targets for the stereovision system, each permitting displacement measurement
along the X, Y and Z axes.
The conventional sensors are conditioned and acquired by the PACIFIC INSTRUMENT
system (PI 660-6000), while the stereovision pictures are monitored and acquired by the
VIDEOMETRIC VDM-3D Acquisition module. Both systems have a 100 Hz acquisition rate.
The post-processing of the stereovision pictures is carried out with the VIDEOMETRIC
VDM-3D Analyser module.
Fig. 3. 70 Drums stacks testing instrumentation
Fig. 3. 71 Instrumentation implementation on drums stacks: 3 accelerometers (3D) on the table
next to the concrete floor; 2 cable sensors on one top drum
More specifically, the ‘instrumentation’ for the stereovision system consists of targets fixed on
each part of the mock up as follows:
• 4 targets on the concrete floor. These are used to define the reference origin and axes: 3
are on the 'y' axis and 2 on the 'x' one. The concrete floor is assumed to be perfectly
linked to the table.
• 1 or 2 targets on each pallet, depending on the field of view. These are placed on
mechanical parts attached to the pallets so that they can be seen by the cameras through
the metallic structure.
• 1 target on each top drum.
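The construction of a reference frame from the floor targets can be sketched as a Gram-Schmidt orthonormalisation: one target gives the origin, a target on the 'x' axis gives the first direction, and a target near the 'y' axis is orthogonalised against it. The coordinates below are invented for illustration:

```python
def sub(a, b):   return [a[i] - b[i] for i in range(3)]
def dot(a, b):   return sum(a[i] * b[i] for i in range(3))
def scale(a, s): return [x * s for x in a]
def unit(a):
    n = dot(a, a) ** 0.5
    return [x / n for x in a]
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

def reference_frame(origin, on_x, near_y):
    """Orthonormal frame (ex, ey, ez) from three measured target positions."""
    ex = unit(sub(on_x, origin))
    vy = sub(near_y, origin)
    vy = sub(vy, scale(ex, dot(vy, ex)))   # Gram-Schmidt: remove ex component
    ey = unit(vy)
    ez = cross(ex, ey)                     # right-handed vertical axis
    return ex, ey, ez

ex, ey, ez = reference_frame([0, 0, 0], [2, 0, 0], [0.5, 3, 0])
print(ex, ey, ez)  # [1.0, 0.0, 0.0] [0.0, 1.0, 0.0] [0.0, 0.0, 1.0]
```

All subsequent target displacements can then be projected onto (ex, ey, ez) to express them in the floor-fixed axes.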
Fig. 3. 72 VIDEOMETRIC targets fixed on the mock-up
Colors as follows:
• Red targets: reference axes on the concrete floor
• Green targets: pallet #1
• Blue targets: pallet #2
• Magenta targets: pallet #3
• Multicolored targets: top drums
In all tests, no drum fell, which means the stacks remained globally in shape. The
maximum displacement of a top drum ranged from 4.6 mm to 22.1 mm depending on the
stack configuration. This result has been post-processed directly from the target displacements
with the VDM-3D Analyser module.
Some comparisons have been carried out between the conventional displacement sensors and
the target analyses. Overall, the agreement is very good in both directions, but the noise is
higher along the X axis.
The next two graphs show the comparisons between the LVDT sensors of the table's horizontal
actuators and the relative displacement of the VIDEOMETRIC object (built from the 4 targets
on the concrete floor), with the stereovision arm as reference.
Fig. 3. 73 Comparisons of VIDEOMETRIC and LVDT sensor measurements for the shaking
table, along Ox (top) and Oy (bottom)
Comparisons between the cable sensors on a top drum and the stereovision measurements (1
target) are quite accurate too along both axes, but the noise is again higher in the Ox direction.
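Such comparisons can be quantified with a root-mean-square deviation between the two synchronised signals; the few samples below are invented for illustration, not measured data:

```python
def rms_deviation(a, b):
    """RMS deviation between two equally sampled, synchronised signals."""
    assert len(a) == len(b), "signals must have the same length"
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

cable  = [0.0, 1.0, 2.0, 1.0, 0.0]    # contact sensor, mm (hypothetical)
stereo = [0.1, 0.9, 2.1, 1.0, -0.1]   # stereovision target, mm (hypothetical)
print(rms_deviation(cable, stereo))   # ≈ 0.089 mm
```

A per-axis RMS value computed this way makes the "noise is higher along Ox" observation directly comparable between tests.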
Fig. 3. 74 Comparisons of VIDEOMETRIC and cable sensor measurements for the top drum:
displacement (mm) along Ox (top) and Oy (bottom)
Measurement noise is greater along the Ox axis because only 2 targets define the Ox axis as a
reference, whereas 3 targets define the Oy axis (Fig. 3. 72); in other words, Ox is geometrically
defined with less accuracy. This implies larger geometrical calculation errors in positioning the
other targets with respect to the Ox axis than to the Oy axis, and therefore more measurement
noise. In the Ox case, the measurement inaccuracy is close to the maximum possible error
determined during the still test (±0.085 mm).
Other displacement measurements with the stereovision system are interesting for checking the
coherence of the results. The pallet displacements increase with their height (pallet #1 is the
bottom one and #3 the top one), which is qualitatively a logical result.
Fig. 3. 75 VIDEOMETRIC measurements for pallets: displacements along Ox (top) and Oy
(bottom)
We can also compare the displacements of the 5 top drums. The measured displacements are
well in phase with one another; once again, the results are qualitatively consistent.
Fig. 3. 76 VIDEOMETRIC measurements for top drums: displacements along Ox (top) and Oy
(bottom)
3.5.12.5 Conclusions
These different tests and results show that the stereovision system provided by the company
VIDEOMETRIC fulfils TAMARIS experimental needs. Some comments can be drawn from
these first experiments:
• The measurement technique is contact-free, i.e. there is no interaction between specimen
and sensor.
• It is a robust displacement measurement technique for use in a test hall, with triggering
capability (synchronization with other processes) and dedicated lighting control.
• It is quite easy to implement and use, but care must be taken with target positioning, the
choice of the reference axes and origin, and the choice of the different independent
objects if necessary. A 'directions of use' document must be written for the lab.
• The accuracy, with an optimal implementation, is very good: 1/100th of a pixel.
• The acquisition frequency of 90 Hz is sufficient for seismic testing.
• Analyses can be performed after the test as many times as necessary.
In TAMARIS, the next steps will be to use the same device on other test rigs on the AZALEE
table:
• Metallic structures.
• Concrete buildings.
Another system derived from this stereovision one will be provided by VIDEOMETRIC. It is
based on a 3D stereo-correlation technique and will be dedicated to the measurement of 3D
displacement maps over reduced areas of concrete buildings, for instance. This system will be
received, tested and checked during 2011.
3.5.13 Conclusion
A review of the available vision sensors has been made, which has highlighted the richness of
this technology and its fast development, linked as it is to the prosperous branches of electronics
governed by the well-known Moore's law. Indeed, vision sensors will offer ever higher
resolution with decreasing pixel size. Their SNR will probably be kept at a fair level, giving a
true 12-bit output for medium-quality CMOS sensors at frame rates above 100 Hz. High-end
scientific CCDs will have a very good SNR, giving 14- to 16-bit output at quite low sampling
frequencies, while scientific CMOS sensors are apparently on the rise and should offer the same
quality as CCDs but at a higher frequency. This review has been restricted to sensors working at
frequencies of the order of 100 Hz at most.
It has also been shown that, given a meticulous calibration of a stereo rig, it is possible to
extract important information on the behaviour of a large structure. The field of view was 6 m on
the bridge, and the pixel scale varied from 3.6 to 4.4 mm on the frames, yet it was possible to
clearly see vertical movements of the order of a tenth of a mm. The importance of vision systems
has been demonstrated for checking boundary conditions, for correcting unpredictable
phenomena and, indeed, for providing maps of measurements that would not be attainable with
classical two-point distance sensors.
3.6 Stress and strain visualisation using thermal imaging
Thermal imaging provides a straightforward way to visualise stress distributions in metallic
elements deformed into their plastic range, and can provide more detailed information than
discrete sensors. There are many thermal imaging cameras available on the market; well-known
manufacturers include Agema and Flir. Using such a camera, it is relatively straightforward to
obtain sequences of images of the temperature distribution across a surface, such as in Fig. 3. 77.
Hotspots can be clearly seen, giving an excellent visual indication of areas of high plasticity and
incipient failure.
Fig. 3. 77 Thermal images from a fatigue test to failure on a yielding shear panel dissipative device
However, converting these to accurate numerical values of temperature, energy, stress and strain
requires considerable care and analysis. This section provides a brief summary of the steps
involved, and the potential pitfalls, based on experiments at Oxford University. The key stages
are:
1. Calibration of temperature data.
2. Transformation of images for deformed specimen to a fixed reference frame.
3. Conversion of temperatures to energy densities.
4. Conversion of energy density to stress and strain.
3.6.1 Calibration of temperature data
Thermal cameras measure infra-red radiation from surfaces within their field of view, and
convert this to a temperature distribution. Typical cameras are able to scan at rates of the order of
tens of Hz and advertised accuracies are typically of the order of 1% or 1°C. Unfortunately, the
temperature measured from a surface is not necessarily equal to the actual temperature of the
surface. Accuracy can be affected by:
Reflections: a cool but reflective surface close to a heat source will show a high apparent
temperature due to reflected infra-red radiation. This should be avoided as far as possible by
careful design of the test set-up.
Emissivity: different surface finishes radiate heat at different rates, and the thermal imaging data
need to be calibrated to account for this. In our experiments this was done by scaling the
camera’s temperature values by an emissivity ratio between 0 and 1. Appropriate values of the
ratio were chosen by comparing camera data with direct temperature sensors attached at discrete
points in calibration tests. As a general rule, dull, dark surfaces have high emissivity (ratio close
to 1) and shiny or polished surfaces have lower emissivity. There is an obvious benefit in having
a specimen with a uniform surface finish, so that emissivity scaling does not need to be varied
over the sample.
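The choice of the emissivity ratio described above can be sketched as a one-parameter least-squares fit between camera readings and contact-sensor readings at the same points. The temperature values are hypothetical, and the simple multiplicative model follows the scaling used in the text rather than a full radiometric correction:

```python
def fit_emissivity_ratio(camera_temps, contact_temps):
    """Least-squares scale factor e minimising sum((e*cam - contact)**2)."""
    num = sum(cam * ref for cam, ref in zip(camera_temps, contact_temps))
    den = sum(cam * cam for cam in camera_temps)
    return num / den

# Hypothetical calibration readings (°C) at three discrete points:
camera  = [100.0, 200.0, 300.0]
contact = [95.0, 190.0, 285.0]
print(fit_emissivity_ratio(camera, contact))  # 0.95
```

A ratio near 1 is consistent with the dull, lightly oxidised steel surfaces described below; shiny patches would pull the fitted value down.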
Many evaluation tests were performed on steel samples, often with a lightly oxidised surface;
this gave an emissivity ratio close to 1 for our camera system. However, at large plastic
deformations, parts of the oxidised surface sometimes flaked away, leaving a shinier surface with
lower emissivity. This then appeared as an apparent cold spot in the thermal imaging, which
needed to be ignored or adjusted.
3.6.2 Transformation of images to a fixed reference frame
Since, in a dynamic test, the test specimen is moving and deforming, successive thermal images
cannot simply be superposed. Instead it is necessary to track the movement of a point in the
specimen in order to extract the time variation of its thermal energy from the images. This is
most easily done by applying a transform to images of the deformed specimen so as to map each
point back to its initial, undeformed position. Once the images have been transformed in this
way, a point in the specimen can be assumed to lie at the same co-ordinates in all the images,
greatly simplifying the subsequent processing.
To perform the transformation, it is necessary to track the motion of a set of key points on the
specimen, and then apply an appropriate order of interpolation between them. In many instances
the specimen will include some suitable points. For example, in Fig. 3. 78 a) - c), tests were
performed on steel dissipators with webs perforated by circular holes; it was a simple matter to
track the position of the centre of each circle. A suitable interpolation function could then be
fitted to these points. Fig. 3. 78 d) - f) show thermal images for specimens with fewer obvious
features. In these tests, easily identifiable dark spots were introduced in the form of small
rectangular rubber pads, which were refrigerated until the test was ready and then attached to the
specimen. The choice of transformation method may vary depending on the complexity of the
specimen and its deformation pattern. For simple geometries, linear interpolation is likely to be
sufficiently accurate. For more complex deformations, more generally formulated tracking
strategies developed in other fields of image analysis can be used (e.g. Marias et al., 2000).
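A minimal version of the mapping back to the undeformed frame can be written with a similarity transform (rotation + uniform scale + translation) estimated from just two tracked key points, conveniently expressed with complex numbers. Real tests track more points and use higher-order interpolation, as noted above; the coordinates here are invented:

```python
def similarity_from_pairs(z1, w1, z2, w2):
    """Transform w = a*z + b mapping reference points z1, z2 (complex)
    onto their tracked positions w1, w2 in the deformed image."""
    a = (w1 - w2) / (z1 - z2)   # rotation and scale as one complex factor
    b = w1 - a * z1             # translation
    return a, b

def map_back(w, a, b):
    """Map a deformed-image point back to the undeformed reference frame."""
    return (w - b) / a

# Two key points tracked through a pure translation of (5, 3) pixels:
a, b = similarity_from_pairs(0 + 0j, 5 + 3j, 10 + 0j, 15 + 3j)
p = map_back(7 + 4j, a, b)
print(p)  # (2+1j): the pixel observed at (7, 4) originated at (2, 1)
```

Once every pixel is mapped back this way, a material point keeps the same coordinates in all frames, which is the property exploited in the energy extraction below.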
Fig. 3. 78 Thermal images from tests on short beam sections
Images a) – c) are for beams weakened by perforating the webs with circular holes. Images d) –
f) are for I-section beams with stiffeners; the black rectangles are rubber pads. The circled area in
e) indicates where the surface has flaked off, giving an apparent change in temperature (Clement,
2002).
3.6.3 Conversion of temperatures to energy densities
A simple mathematical process can be followed to convert temperature distributions to plastic
energies. By manipulation of the heat diffusion equation, the power density p (i.e. energy
released per unit volume per second) can be related to the temperature u by:
p = \rho c_p \frac{\partial u}{\partial t} - k \nabla^2 u    (1)
where cp is specific heat, ρ is density and k thermal conductivity. If the temperature changes by
∆u over a timestep ∆t then the increment of energy per unit volume is
\Delta e = \rho c_p \, \Delta u - k \nabla^2 u \, \Delta t    (2)
If the temperature is known over a regular 2D grid of points (i, j) spaced at ∆x in each direction,
the Laplacian is given by:
\nabla^2 u \approx \frac{u_{i+1,j} + u_{i-1,j} + u_{i,j+1} + u_{i,j-1} - 4 u_{i,j}}{(\Delta x)^2}    (3)
In the case of a thermal imaging camera, the grid points are represented by the pixel centres.
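Combining equations (2) and (3) gives the per-pixel energy increment directly from two successive temperature frames. The sketch below uses nominal steel properties for ρ, cp and k, which are illustrative rather than values from the Oxford tests:

```python
def energy_increment(u0, u1, dt, dx, rho, cp, k):
    """Energy released per unit volume between frames u0 and u1, at
    interior grid points: de = rho*cp*du - k*laplacian(u0)*dt."""
    n, m = len(u0), len(u0[0])
    de = [[0.0] * m for _ in range(n)]
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            lap = (u0[i + 1][j] + u0[i - 1][j] + u0[i][j + 1]
                   + u0[i][j - 1] - 4 * u0[i][j]) / dx ** 2
            de[i][j] = rho * cp * (u1[i][j] - u0[i][j]) - k * lap * dt
    return de

# Uniform 0.01 K rise of a steel plate: the Laplacian term vanishes
u0 = [[20.0] * 4 for _ in range(4)]
u1 = [[20.01] * 4 for _ in range(4)]
de = energy_increment(u0, u1, dt=0.01, dx=0.001, rho=7850.0, cp=490.0, k=50.0)
print(de[1][1])  # ≈ 38465 J/m^3 (= rho * cp * du)
```

Boundary pixels are left at zero because the five-point Laplacian is only defined at interior points; in practice the camera's field of view extends beyond the region of interest.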
3.6.4 Conversion of energy density to stress and strain
Once a distribution of plastic energy has been obtained, it can be converted to stress and strain
distributions by applying plasticity theory. The energy density can be expressed in terms of the
stress σ and the plastic strain εp according to

e = \int \sigma \, d\varepsilon_p    (4)
To relate the stress to the plastic strain requires the definition of a yield criterion, a hardening
rule describing the work hardening of the material, and a flow rule relating the plastic strain
increment to the yield surface. For steel it is reasonable to use a von Mises yield surface with a
normal flow rule, in which the direction of the strain increment is always normal to the yield
surface. An example result is shown in Fig. 3. 79, which shows two snapshots of plastic strain
distributions for the same element as pictured in Fig. 3. 78 e) and f).
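As a one-dimensional illustration of this last step, assume uniaxial stress with linear hardening, σ = σy + H·εp; equation (4) then integrates to e = σy·εp + H·εp²/2, which can be inverted in closed form. The material parameters below are illustrative, not those of the tested specimens:

```python
def plastic_strain_from_energy(e, sigma_y, hardening):
    """Invert e = sigma_y*ep + hardening*ep**2/2 for the plastic strain
    ep, then recover the stress sigma = sigma_y + hardening*ep."""
    ep = (-sigma_y + (sigma_y ** 2 + 2 * hardening * e) ** 0.5) / hardening
    return ep, sigma_y + hardening * ep

# 5.2 MJ/m^3 of dissipated energy, yield 250 MPa, hardening modulus 1 GPa:
ep, sigma = plastic_strain_from_energy(5.2e6, sigma_y=250e6, hardening=1e9)
print(ep, sigma)  # ≈ 0.02 plastic strain at ≈ 270 MPa
```

Applying this inversion pixel by pixel to the energy map from the previous step yields strain fields of the kind shown in Fig. 3. 79; the multiaxial case replaces σ and εp with their von Mises equivalents.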
Fig. 3. 79 Plastic strain distributions deduced from thermal images for the beam pictured in Fig. 3. 78
3.6.5 Conclusion
Thermal imaging has been shown to provide a viable way of visualising stress and strain fields
in metallic specimens during dynamic tests. However, the technique requires careful
implementation and remains prone to some uncertainty. The tests described here could be
improved by better surface preparation of the specimens and more thorough calibration of their
thermal emissivity.
4 Summary
This report covered the research activities of Task JRA2.1 and Task JRA2.2. In greater
detail, it summarized the state of the art as well as the implementation and application of new
types of sensors, time-integration and control techniques, and visualisation and device-modelling
tools capable of enhancing the measurement of the response of test specimens and of improving
the quality of test control.
To achieve the objectives of the aforementioned tasks, selected partners made extensive
use of testing and calibration of instrumented specimens. In particular, the following test types,
with the relevant specimens, were employed:
– Test Type 1 (TT1): a testing bench comprising four (instead of two) electro-magnetic
actuators, designed to control 4-DoF (instead of 2-DoF) linear/non-linear systems with or
without substructuring.
– Test Type 2 (TT2): an actuator calibration bench including a 2.5 kN hydraulic
actuator with 2 servo valves, a steel table mounted on a low-friction ball-bearing rail, and a
real-time hybrid controller with a fibre-optic communication system.
Dissemination of time-integration techniques, control techniques and vision systems to
partner infrastructures not directly involved in the above-mentioned development and/or
application will follow.
References
Adam Pascale. 2009. Using Micro-ElectroMechanical Systems (MEMS) accelerometers for earthquake monitoring, Environmental Systems & Services Pty Ltd, www.esands.com
Ahmadizadeh M. Real-time Hybrid Simulation Procedures for Reliable Structural Performance Testing. PhD thesis, State University of New York at Buffalo, 2007.
Ahmadizadeh M., Mosqueda G. 2009. Online energy-based error indicator for the assessment of numerical and experimental errors in a hybrid simulation. Engineering structures, 31(2009): 1987-1996.
Allen, D. W., 2004, “Software for Manipulating and Embedding Data Interrogation Algorithms into Integrated Systems,” M.S. Thesis, Department of Mechanical Engineering, Virginia Polytechnic Institute and State University, Blacksburg, VA.
Alleyne A. and R. Liu. A simplified approach to force control for electro-hydraulic systems. Control Engineering Practice, 8:1347-1356, May 2000.
Andrew T. Zimmerman and Jerome P. Lynch (2006). Data Driven Model Updating using Wireless Sensor Networks. Proceedings of the 3rd Annual ANCRiSST Workshop, Lake Tahoe, CA, May 29-30, 2006.
Anthoine, A., Capéran, Ph., 2008, “D 8.3 Earthquake tests and analysis of the experimental results”, ESECMaSE (Enhanced Safety and Efficient Construction of Masonry Structures in Europe), Technical Report, Ispra, Italy.
Aoki, S., Fujino, Y., and Abe, M. (2003). “Intelligent Bridge Maintenance System Using MEMS and Network Technology,” in Smart Systems and NDE for Civil Infrastructures, San Diego, CA, March 3–6, Proceedings of the SPIE, Vol. 5057, 37–42.
Basheer, M. R., Rao, V., and Derriso, M., 2003, “Self-organizing Wireless Sensor Networks for Structural Health Monitoring,” in Proceedings of the 4th International Workshop on Structural Health Monitoring, Stanford, CA, September 15–17, 1193–1206.
Bayer V., Dorka U.E., Füllekrug U., Gschwilm J.. 2005. On Real-time Pseudo-dynamic Sub-structure Testing: Algorithm, Numerical and Experimental Results, Aerospace Science and Technology. 9: 223-232
Benefits of 3D Integration in Digital Imaging Applications, http://www.ziptronix.com/images/pdf/imaging.pdf
Bennett, R., Hayes-Gill, B., Crowe, J. A., Armitage, R., Rodgers, D., and Hendroff, A., 1999, “Wireless Monitoring of Highways,” in Smart Systems for Bridges, Structures, and Highways, Newport Beach, CA, March 1–2, Proceedings of the SPIE, Vol. 3671, 173–182.
Binns, J. (2004). “Bridge Sensor System Delivers Results Quickly,” Civil Engineering, Vol. 74, No. 9, 30–31.
Bonelli A, Bursi OS, He L., Magonette G. and Pegon P. 2008. Convergence Analysis of a Parallel Interfield Method for Heterogeneous Simulations with Dynamic Substructuring. International Journal for Numerical Methods in Engineering. 75(7): 800-825.
Bonnet P.A., Lim C.N. Williams M.S. Blakeborough A. et al. 2007. Real-time hybrid experiments with Newmark integration, MCSmd outer-loop control and multi-tasking strategies. Earthquake engineering and structural dynamics. 36: 119-141.
Bursi, O. S., Gonzalez-Buelga, A., Vulcan, L. and Wagg, D. J. 2008. Novel coupling Rosenbrock-based algorithms for real-time dynamic substructure testing. Earthquake Engineering and Structural Dynamics, 37(3):339-360.
Bursi O. S. and Wagg D.. Editors. 2008. Modern testing techniques for structural systems - Dynamics and control. Springer Wien New York.
Bursi O. S., Chuanguo Jia and Zhen Wang. 2009a. Monolithic and partitioned L-stable Rosenbrock Methods for Dynamic Substructure tests. 3rd International Conference on Advances in Experimental Structural Engineering. Oct. 15-16, San Francisco, USA.
Bursi O. S., He L, Boneli A, Pegon P. 2009b. Novel Generalized-a Methods for Interfield Parallel Integration of Heterogeneous Structural Dynamic Systems. Journal of Computational and Applied Mathematics (in press)
Bursi O. S., Tondini Nicola, Bonelli Alessio, Franceschetti Marco, Prodomi Nicola. 2009c. Workpackage No. WP6: Laboratory Evaluation of the Performance of the Condition Monitoring System and the Predictive Capability of the Theory of Seismic Failure. MONICO/UNITN/WP7/Data Elaboration.doc.
Bursi O. S., Chuanguo Jia and Zhen Wang. 2011. Novel partitioned time integration methods and actuator dynamics compensation techniques for real time heterogeneous testing. 4th International Conference on Advances in experimental structural engineering. June 29-30, in Ispra, VA, Italy.
Bursi O. S., Tondini Nicola, Bonelli Alessio, Franceschetti Marco, Francescotti Stefano, Prodomi Nicola. 2011. Report on the whole simulation tests and the calibration and validation of the theory of Seismic Failure. Deliverable D6.1. MONICO/UNITN/WP7
C.U. Grosse and M. Kroger. 2006. Inspection and Monitoring of Structures in Civil Engineering, NDT.net, Vol. 11, No. 1, Jan.
Camacho E.F. and Bordons C. 2003. Model predictive control (Second Edition). Springer.
Capéran, Ph., 2007a. Displacement and Strain Field Photogrammetric Measurements of a Reinforced Concrete Slab Submitted to an Earthquake Loading. In: P. Antoine, editor. Proceedings of the 3rd Workshop on Optical Measurement Techniques for Structures and Systems (OPTIMESS). Brussels (Belgium): OPTIMESS Scientific Research Network supported by the Fund for Scientific Research, Flanders; 2007. p. 1-8.
Capéran, Ph. 2007b. A New Tracking Algorithm with Application to a Practical Measurement Case. 8th Conference on Optical 3-D Measurement Techniques. Zurich, Switzerland.
Casciati, F., Faravelli, L., Borghetti, F., and Fornasari, A., 2003, “Tuning the Frequency Band of a Wireless Sensor Network,” in Proceedings of the 4th International Workshop on Structural Health Monitoring, Stanford, CA, September 15–17, 1185–1192.
Casciati, F., Casciati, S., Faravelli, L., and Rossi, R., 2004, “Hybrid Wireless Sensor Network,” in Smart structures and Materials: Sensors and Smart Structures Technologies for Civil, Mechanical, and Aerospace Systems, San Diego, CA, March 15–18, Proceedings of the SPIE, Vol. 5391, 308–313.
Chang S.Y.. 2002. Explicit pseudodynamic algorithm with unconditional stability. Journal of Engineering Mechanics. 128(9): 935-947.
Cheah, C. C., Liu, C. & Slotine, J. J. E. 2006. Adaptive tracking control for robots with unknown kinematic and dynamic properties. The International Journal of Robotics Research 25, No 3, 283-296.
Chen, C., Ricles, J.M. 2008a. Development of direct integration algorithms for structural dynamics using discrete control theory. Journal of Engineering Mechanics (ASCE), 134(8), 676-683.
Chen C, Ricles J.M. 2008b. Stability analysis of SDOF real-time hybrid testing systems with explicit integration algorithms and actuator delay. Earthquake Engineering and Structural Dynamics, 37(4), 597-613.
Chen, C., Ricles, J.M. 2009. Improving the inverse compensation method for real-time hybrid simulation through a dual compensation scheme. Earthquake Engineering and Structural Dynamics. Online: www.interscience.wiley.com (DOI: 10.1002 /eqe. 904).
Chen, C., Ricles, J.M., Marullo, T.M., Mercan, O. 2009. Real-time hybrid testing using an unconditionally stable explicit integration algorithm. Earthquake Engineering and Structural Dynamics, 38(1), 23-44.
Christine Connolly. March 2009. Structural Monitoring with Fibre Optics, UK Correspondent.
Chung, H-C., Enomoto, T., Shinozuka, M., Chou, P., Park, C., Yokoi, I., and Morishita, S. (2004a). “Real-time Visualization of Structural Response with Wireless MEMS Sensors,” in Proceedings of the 13th World Conference on Earthquake Engineering, Vancouver, BC, Canada, August 2–6.
Chung, H-C., Enotomo, T., Loh, K., and Shinozuka, M. (2004b). “Real-time Visualization of Bridge Structural Response Through Wireless MEMS Sensors,” in Testing, Reliability, and Application of Micro- and Nano-Material Systems II, San Diego, CA, March 15–17, Proceedings of SPIE, Vol. 5392, 239–246.
Clement D.E. (2002) Seismic analysis of knee elements for steel frames. D.Phil. Thesis, University of Oxford.
Conte J. P. and T. L. Trombetti. Linear dynamic modeling of a uni-axial servo-hydraulic shaking table system. Earthquake Engineering and Structural Dynamics, 29:1375-1404, 2000.
Darby A.P., Blakeborough A., Williams M.S. 2001. Improved Control Algorithm for Real-time Substructure Testing. Earthquake Engineering and Structural Dynamics. 30: 431-448.
Doerr K.-U., Hutchinson T. C., Kuester F., 2005. A methodology for image-based tracking of seismic-induced motions. Smart sensor technology and measurement systems. SPIE Conference, San Diego CA, 2005, SPIE Proceedings, vol. 5758, pp. 321-332, ISBN 0-8194-5739-6.
El Gamal A. and Eltoukhy H., "CMOS Image Sensors" IEEE Circuits and Devices Magazine, Vol. 21. Issue 3, May-June 2005.
Farrar, C. R., Allen, D. W., Ball, S., Masquelier, M. P., and Park, G., 2005, “Coupling Sensing Hardware with Data Interrogation Software for Structural Health Monitoring,” in Proceedings of the 6th International Symposium on Dynamic Problems of Mechanics (DINAME), Ouro Preto, Brazil, February 29–March 4.
Francois LeBlanc. 2003. THE OSMOS FIBER OPTIC MONITORING SYSTEM, Field Trip Report, SubTerra Inc. - Osmos Monitoring System, Seattle, WA, June.
Fujita, S., Furuya, O., Niitsu, Y., Mikoshiba, T., 2005. Research and development of three dimensional Measurement technique for shake table test using Image processing. 18th International Conference on Structural Mechanics in Reactor Technology (SMiRT 18), Beijing, China, August 7-12, 2005, SMiRT18-K12-6.
Gadre, D. V. (2001). Programming and Customizing the AVR Microcontroller. McGraw-Hill, New York.
Galbreath, J. H., Townsend, C. P., Mundell, S. W., Hamel, M. J., Esser, B., Huston, D., and Arms, S. W. (2003). “Civil Structure Strain Monitoring with Power-efficient, High-speed Wireless Sensor Networks,” in Proceedings of the 4th International Workshop on Structural Health Monitoring, Stanford, CA, September 15–17, 1215–1222.
Gawthrop, P. J., Wallace, M. I., Wagg, D. J.. 2005. Bond-graph based substructuring of dynamical systems, Earthquake Engineering and Structural Dynamics, 34(6), 687-703.
Gawthrop, P. J., Wallace, M. I., Neild, S. A., Wagg, D. J.. 2007. Robust real-time substructuring techniques for under-damped systems, Struct. Control Health Monit., 14: 591-608.
GeorgeE.Smith. (2009). The invention and early history of the CCD. Nuclear Instruments and Methods in Physics Research A , pp. 1-6.
Georgopoulos, A., Tournas, E., Mouzakis, Ch., Vougioukas, E., Carydis, P. Determination of Seismic Movements of Monuments using Stereoscopic Video. Proceedings of 3rd Conference on Optical 3-D Measurement Techniques, October 2-4 1995, Vienna.
Georgopoulos, A., Tournas, E., 1996. Towards an Operational Digital Video Photogrammetric System for 3-D Measurements. International Archives of Photogrammery and Remote Sensing, Vol. XXXI, part B2, Commission II, pp. 111-116.
Georgopoulos, A., Tournas, E., 2001. Stereoscopic Video Imaging using a low-cost PC based system. Videometrics VII, 2001 Santa Clara.
Glaser, S. D., 2004, “Some Real-world Applications of Wireless Sensor Nodes,” in Smart Structures and Materials: Sensors and Smart Structures Technologies for Civil, Mechanical, and Aerospace Systems, San Diego, CA, March 15–18, Proceedings of the SPIE, Vol. 5391, 344–355.
Gravouil A., Combescure A.. 2001. Multi-time-step explicit-implicit method for non-linear structural dynamics. International Journal for Numerical Methods in Engineering. 50:199-225.
Grueger et al.. 2003. Performances and application of a spectrometer with micromachined scanning grating, Integrated Optoelectronics Devices, SPIE, San Jose, CA.
Hans Poisel, 2008, Low-cost fiber-optic strain sensor for structural monitoring, SPIE, 10.1117/2.1200803.1053.
Hong-Nan Li, Dong-Sheng Li and Gang-Bing Song, 2004, Recent applications of fiber optic sensors to health monitoring in civil engineering, Engineering Structures 26: 1647–1657.
Inaudi D, Casanova N, Kronenberg P, Marazzi S & Vurpillot S. 1997. Embedded and surface mounted fiber optic sensors for civil structural monitoring. Smart Structures and Materials Conference, San Diego, SPIE (Vol 3044). 236–243.
Inaudi D, Elamari A, Pflug L, Gisin N, Breguet J & Vurpillot S. 1994. Low-coherence deformation sensors for the monitoring of civil-engineering structures. Sensors and Actuators A 44: 125–130.
Inaudi D, Casanova N, Vurpillot S, Kronenberg P, Martinola G, Steinmann G, Mathier J-F, 1999, “SOFO: Structural Monitoring with Fiber Optic Sensors”, FIB, “Monitoring and Safety Evaluation of Existing Concrete Structures”, 12-13.2, Vienna, Austria.
Inaudi D. 1997. Field testing and application of fiber optic displacement sensors in civil structures. 12th International Conference on OFS '97 – Optical Fiber Sensors, Williamsburg, OSA Technical Digest Series (Vol 16). 596–599.
Inaudi D. 2000. Application of civil structural monitoring in Europe using fiber optic sensors. Prog. Struct. Engng Mater. 2: 351–358.
Jacobson Bo. The Stribeck memorial lecture. Tribology International, 36(11):781 - 789, 2003. NORDTRIB symposium on Tribology 2002.
Janesick, J. (2002, February). Dueling detectors. SPIE's OE Magazine.
Jan Bogaerts, Piet De Moor, Koen De Munck, Deniz Sabuncuoglu Tezcan and Chris Van Hoof, 2007. Development of CMOS active pixel sensors for earth observations. Proceedings 5th EARSeL Workshop on Imaging Spectroscopy, Bruges, Belgium, April 23-25 2007.
Janesick, J. (2001). Scientific Charge-Coupled Devices. SPIE Publications.
Janesick, J. (2003, December). CMOS imagers can be charge-coupled. Laser Focus World.
Janesick, J. (2007). Photon Transfer. SPIE Society of Photo-Optical Instrumentation Engineers.
Janesick, J., Elliott, T., Tower, J. (2008, July). CMOS Detectors: Scientific monolithic CMOS imagers come of age. Laser Focus World.
Jelali M. and A. Kroll. 2003. Hydraulic Servo-systems: Modelling, Identification and Control, pages 70-72. Springer.
State-of-the-art report for JRA2
211
Jerome P. Lynch, Aaron Partridge, Kincho H. Law, Thomas W. Kenny, Anne S. Kiremidjian and Ed Carryer. 2003. Design of Piezoresistive MEMS-Based Accelerometer for Integration with Wireless Sensing Unit for Structural Monitoring, Journal of Aerospace Engineering, Vol. 16, No. 3, July.
Jerome P. Lynch and Kenneth J. Loh (March 2006). A Summary Review of Wireless Sensors and Sensor Networks for Structural Health Monitoring. The Shock and Vibration Digest, Vol. 38, No. 2, 98-128
Jia Chuanguo. 2010. Monolithic and partitioned Rosenbrock-based time integration methods for dynamic substructure tests. PhD thesis, University of Trento, Italy.
Jia C., Bursi O.S., Bonelli A. and Wang Z., 2011. Novel partitioned integration methods for DAE systems based on L-stable linearly implicit algorithms. International Journal for Numerical Methods in Engineering, in print.
Jin-Song Pei, Chetan Kapoor, Troy L. Graves-Abe, Yohanes P. Sugeng, and Jerome P. Lynch (2007). An experimental investigation of the data delivery performance of a wireless sensing unit designed for structural health monitoring. Struct. Control Health Monit. 2008; 15:471–504.
Juang Jer-Nan and Phan Minh Q. 2001. Identification and Control of Mechanical Systems. The Press Syndicate of the University of Cambridge.
Judy J. W.. 2001. Microelectromechanical Systems (MEMS): Fabrication, Design and Applications, Smart Mater. Struct., Vol. 10, No. 6, pp.1115-1134.
Jung P., Stammen J. and Greifendorf D.. 2001. Future microelectronic hardware concepts for wireless communication beyond 3G, Proc. of the WWRF second meeting, Helsinki.
Jung, R. Y. 2005. Development of real-time hybrid system. Doctoral thesis, University of Colorado.
Jürgen Braunstein, Jerzy Ruchala, and Bernard Hodac. 2002. Smart Structures: Fiber-Optic Deformation and Displacement Monitoring. First International Conference on Bridge Maintenance, Safety and Management, IABMAS 2002, Barcelona, 14-17 July.
Kincho H. Law, Andrew Swartz, Jerome P. Lynch, Yang Wang (2008). Wireless Sensing and Structural Control Strategies. The Fourth International Workshop on Advanced Smart Materials and Smart Structures Technologies, Tokyo, Japan, June 24-25, 2008.
Kodak. (2008). Retrieved from http://www.kodak.com/global/plugins/acrobat/en/business/ISS/datasheet/fullframe-/KAF8300-LongSpec.pdf
Kolb A., Barth E., Koch R., and Larsen R., 2010. Time-of-flight sensors in computer graphics, Computer Graphics Forum, 29(1):141–159.
Kottapalli, V. A., Kiremidjian, A. S., Lynch, J. P., Carryer, E., Kenny, T. W., Law, K. H., and Lei, Y., 2003, “Two-tiered Wireless Sensor Network Architecture for Structural Health Monitoring,” in Smart Structures and Materials, San Diego, CA, March 3–6, Proceedings of the SPIE, Vol. 5057, 8–19.
Kuehn J., D. Epp and W. N. Patten. 1999. High-Fidelity Control of a Seismic Shake Table. Earthquake Engineering and Structural Dynamics, 28: 1235-1254.
Kung-Chun Lu, Chin-Hsiung Loh, Yuan-Sen Yang, Jerome P. Lynch and K. H. Law (2008). Real-time structural damage detection using wireless sensing and monitoring system. Journal of Smart Structural System, revised January 30, 2008.
Kyrychko, Y.N., Blyuss K.B., Gonzalez-Buelga A., Hogan S.J. and Wagg D.J. 2006. Real-time dynamic substructuring in a coupled oscillator–pendulum system. Proc. R. Soc. A, vol. 462, no. 2068: 1271-1294.
Lamarche CP. 2009. Development of Real-Time Dynamic Substructuring Procedures for the Seismic Testing of Steel Structures. Doctoral thesis.
Lathuilière, A.-C., & Capéran, Ph. (2007b). Use of low-cost digital consumer camera for stereo-photogrammetry of structures undergoing destructive tests. OPTIMESS2007 Workshop 28th-30th May 2007. Leuven.
Li, X., Peng, G.D., Rizos, C., Ge, L., Tamura, Y., Yoshida, A., 2003. Integration of GPS, accelerometers and optical fibre sensors for structural deformation monitoring. 2003 Int. Symp. on GPS/GNSS, Tokyo, Japan, 15-18 November, 617-624.
Lim, C. N., Neild, S. A., Stoten, D. P., Drury, D., Taylor, C. A.. 2007. Adaptive Control Strategy for Dynamic Substructuring Tests. Journal of Engineering Mechanics, 864-873.
Litwiller, D. (2001). CCD vs. CMOS: Facts and Fiction. Photonics Spectra.
Litwiller, D. (2005). CMOS vs. CCD: Maturing Technologies, Maturing Markets. Photonics Spectra.
Loh Chin-Hsiung, Lynch Jerome P., Lu Kung-Chun, Wang Yang, Chang Chia-Ming, Lin Pei-Yang and Yeh Ting-Hei (2007). Experimental verification of a wireless sensing and control system for structural control using MR dampers. Earthquake Engng Struct. Dyn. 2007; 36: 1303–1328.
Lucas, B., & Kanade, T. (1981). An iterative image registration technique with an application to stereo vision. Proceedings of the International Joint Conference on Artificial Intelligence, (pp. 674–679).
Lynch, J. P., Law, K. H., Kiremidjian, A. S., Kenny, T. W., Carryer, E., and Partridge, A., 2001, “The Design of a Wireless Sensing Unit for Structural Health Monitoring,” in Proceedings of the 3rd International Workshop on Structural Health Monitoring, Stanford, CA, September 12–14.
Lynch, J. P., Law, K. H., Kiremidjian, A. S., Kenny, T. W., and Carryer, E., 2002a, “A Wireless Modular Monitoring System for Civil Structures.” in Proceedings of the 20th International Modal Analysis Conference (IMAC XX), Los Angeles, CA, February 4–7, 1–6.
Lynch, J. P., Law, K. H., Kiremidjian, A. S., Carryer, E., Kenny, T. W., Partridge, A., and Sundararajan, A., 2002b, “Validation of a Wireless Modular Monitoring System for Structures,” in Smart Structures and Materials: Smart Systems for Bridges, Structures, and Highways, San Diego, CA, March 17–21, Proceedings of the SPIE, Vol. 4696, No. 2, 17–21.
Lynch, J. P., Sundararajan, A., Law, K. H., Kiremidjian, A. S., Carryer, E., Sohn, H., and Farrar, C. R., 2003, “Field Validation of a Wireless Structural Health Monitoring System on the Alamosa Canyon Bridge,” in Smart Structures and Materials: Smart Systems and Nondestructive Evaluation for Civil Infrastructures, San Diego, CA, March 3–6, Proceedings of the SPIE, Vol. 5057, 267–278.
Lynch, J. P., Parra-Montesinos, G., Canbolat, B. A., and Hou, T-C., 2004, “Real-time Damage Prognosis of High-performance Fiber Reinforced Cementitious Composite Structures,” in Proceedings of Advances in Structural Engineering and Mechanics (ASEM’04), Seoul, Korea, September 2–4.
Lynch, J. P., Sundararajan, A., Law, K. H., Carryer, E., Farrar, C. R., Sohn, H., Allen, D. W., Nadler, B., and Wait, J. R., 2004a, “Design and Performance Validation of a Wireless Sensing Unit for Structural Health Monitoring Applications,” Structural Engineering and Mechanics, Vol. 17, No. 3, 393–408.
Lynch, J. P. (2005). “Design of a Wireless Active Sensing Unit for Localized Structural Health Monitoring,” Journal of Structural Control and Health Monitoring, Vol. 12, No. 3–4, 405–423.
Lynch J. P., Wang Y., R. A. Swartz, K. C. Lu and C. H. Loh (2008). Implementation of a closed-loop structural control system using wireless sensor networks. Struct. Control Health Monit. 2008; 15:518–539
Marias K, Behrenbruch C.P., Brady J.M., Parbhoo S., Seilalian A.M. (2000) Non-rigid registration of temporal mammogram pairs via a combination of boundary and internal landmarks. Proceedings of Int. Workshop on Digital Mammography.
Maser, K., Egri, R., Lichtenstein, A., and Chase, S. (1996). “Field Evaluation of a Wireless Global Bridge Evaluation and Monitoring System,” in Proceedings of the 11th Conference on Engineering Mechanics, Fort Lauderdale, FL, May 19–22, Vol. 2, 955–958.
Mastroleon, L., Kiremidjian, A. S., Carryer, E., and Law, K. H., 2004, "Design of a New Power-efficient Wireless Sensor System for Structural Health Monitoring," in Non-destructive Detection and Measurement for Homeland Security II, San Diego, CA, March 16–17, Proceedings of the SPIE, Vol. 5395, 51–60.
McGlone J.C. (ed.), 2004. Manual of Photogrammetry. American Society for Photogrammetry and Remote Sensing, 5th edition, Bethesda, Maryland.
Measures, Raymond M., Structural Monitoring with Fiber Optic Technology, 03-2001, Book, Publisher: Academic Press, ISBN: 9780080518046.
Merritt H. E. 1967. Hydraulic Control Systems. Wiley.
Mitchell, K., Rao, V. S., and Pottinger, H. J., 2002, "Lessons Learned About Wireless Technologies for Data Acquisition," in Smart Structures and Materials 2002: Smart Electronics, MEMS, and Nanotechnology, San Diego, CA, March 18–21, Proceedings of the SPIE, Vol. 4700, 331–341.
Morari, M. and Zafiriou, E. 1989. Robust Process Control. Prentice Hall.
Mosqueda Gilberto, Stojadinovic Bozidar, and Mahin Stephen. 2007a. Real-time error monitoring for hybrid simulation. Part I: methodology and experimental verification. Journal of Structural Engineering, 133(8): 1100-1108.
Mosqueda Gilberto, Stojadinovic Bozidar, and Mahin Stephen. 2007b. Real-time error monitoring for hybrid simulation. Part II: structural response modification due to errors. Journal of Structural Engineering, 133(8): 1109-1117.
Nagayama, T., Ruiz-Sandoval, M., Spencer, B. F. Jr., Mechitov, K. A., and Agha, G., 2004, “Wireless Strain Sensor Development for Civil Infrastructure,” in Proceedings of the 1st International Workshop on Networked Sensing Systems, Tokyo, Japan, June 22–23.
Nakashima M. et al. 1992. Development of Real-time Pseudodynamic Testing, Earthquake Engineering and Structural Dynamics. 21(1): 79-92.
Nakanishi, J., Mistry, M. & Schaal, S. 2007. Inverse dynamics control with floating base and constraints. In 2007 IEEE International Conference on Robotics and Automation, pp. 1942-1947.
Nii O. Attoh-Okine, Stephen Mensah. 2002. MEMS Application in Pavement Condition Monitoring: Challenges, International Symposium on Automation and Robotics in Construction, 19th (ISARC). Proceedings. National Institute of Standards and Technology, Gaithersburg, Maryland. September 23-25, 2002, 387-392 pp.
Normey-Rico J.E. and Camacho E.F. 2007. Control of Dead-time Processes. Springer.
Ogata, K., 1970, Modern Control Engineering, ISBN: 0-13-615673-8.
Oggier, Th., Büttgen, B., Lustenberger, F., 2007. SwissRanger SR3000 and First Experiences based on Miniaturized 3D-TOF Cameras. Swiss Center for Electronics and Microtechnology, CSEM, Zurich. Proc. SPIE, 5249: 534–545.
Ou, J., Li, H., and Yu, 2004, “Development and Performance of Wireless Sensor Network for Structural Health Monitoring,” in Smart Structures and Materials, San Diego, CA, March 15–18, Proceedings of the SPIE, Vol. 5391, 765–773.
Pakzad, S. N. and Fenves, G. L., 2004, “Structural Health Monitoring Applications Using MEMS Sensor Networks,” in Proceedings of the 4th International Workshop on Structural Control, New York, NY, June 10–11, 47–56.
Pegon, P. and Magonette, G.. 2002. Continuous PsD testing with non-linear substructuring: Presentation of a stable parallel Inter-Field procedure. JRC special publication, No. I.02.167, E. C., JRC, ELSA, Italy.
Penzien, J. Wu C. L. 1998. Stresses In Linings Of Bored Tunnels, Earthquake Engineering and Structural Dynamics, Vol. 27, 3. pp 283-300.
Pozzi Matteo, Zonta Daniele, Trapani Davide (UniTN), Laboratory tests on Phase I (2009)- wireless accelerometers, MEMSCON, SPECIFIC TARGETED RESEARCH PROJECT- NMP- CP-TP 212004-2.
Rafael Aguilar, Luis F. Ramos and Paulo B. Lourenço. 2009. Structural Dynamic Monitoring in Historical Masonry Structures using Wireless and MEMS Technology, 1st WTA-International PhD Symposium - Building Materials and Building Technology for Preservation of the Built Heritage, 10 August- 10 September.
Rappaport, T. S. (2002), ‘‘Wireless Communications: Principles and Practice’’, Prentice-Hall, Englewood Cliffs, NJ.
Reibel, Y.; Jung, M.; Bouhifd, M.; Cunin, B.; Draman, C. , CCD or CMOS camera noise characterization, 2003, The European Physical Journal Applied Physics, Volume 21, Issue 1, January 2003, pp.75-80
Reiterer, A., Lehmann, M., Miljanovic, M., Ali, H., Paar, G., Egly, U., Eiter, T., Kahmen, H., 2008. Deformation Monitoring using a new kind of Optical 3D Measurement System: Components and Perspectives. 13th FIG Symposium on Deformation Measurement and Analysis/ 4th IAG Symposium on Geodesy for Geotechnical and Structural Engineering, LNEC, Lisbon, May 2008.
Ren W., Steurer M., and Woodruff S. 2007. Accuracy evaluation in power hardware-in-the-loop (PHIL) simulation center for advanced power systems. Proceedings of the 2007 summer computer simulation conference, SESSION: Computational modeling and simulation of embedded systems: modeling and simulation of real-time embedded systems, San Diego, California. Pages: 489-493.
Robertson, G. 2006. Precise Dynamic Measurement of Structures Automatically Utilizing Adaptive Targeting. ISPRS Archives, Comission V Dresden 2006.
Rosenbrock, H.H. 1963. Some general implicit processes for the numerical solution of differential equations. Computer Journal 5, 329-330.
Ruiz-Sandoval, M., Spencer, B. F. Jr., and Kurata, N., 2003, “Development of a High Sensitivity Accelerometer for the Mica Platform,” in Proceedings of the 4th International Workshop on Structural Health Monitoring, Stanford, CA, September 15–17, 1027–1034.
Ruiz-Sandoval, M. E., 2004, “ ‘Smart’ Sensors for Civil Infrastructure Systems,” Doctor of Philosophy Thesis, Department of Civil Engineering and Geological Sciences, University of Notre Dame, Notre Dame, IN.
Saouma Victor, Sivaselvan Mettupalayam V. Editors. 2008. Hybrid Simulation - Theory, implementation and applications. Taylor&Francis.
Schenk, H., Wolter, A. and Lakner, H.. 2001. Design optimization of an electrostatically driven Micro Scanning Mirror, In MOEMS and Miniaturized Systems II. Bellingham, Wash. SPIE, 35-44.
Shing P.B., Mahin Stephen. 1990. Experimental error effects in pseudodynamic testing. Journal of Engineering Mechanics, 116(4): 805-821.
Shing P.B., Vannan Mani, and Cater Edward. 1991. Implicit time integration for pseudodynamic tests, Earthquake engineering and structural dynamics, 20: 551-576.
Sivaselvan, M. V. 2006. A unified view of hybrid seismic simulation algorithms, Workshop.
Spinnler G. 1997. Conception des Machines, volume 1 - Statique, pages 221-225. Presses Polytechniques et Universitaires Romandes.
Stoten, D. P. & Gómez, E. G. 2001. Adaptive control of shaking tables using the minimal control synthesis algorithm. Philosophical Transactions of The Royal Society 359, No 1786, 1697-1723.
Stoten, D. P., Hyde, R. A. 2006. Adaptive control of dynamically substructured systems: the single-input single-output case. Proc. IMechE, Vol 220, Part I: J. Systems and Control Engineering, 63-79.
Straser, E. G. and Kiremidjian, A. S. (1998). "A Modular, Wireless Damage Monitoring System for Structures." Technical Report 128, John A. Blume Earthquake Engineering Center, Stanford University, Stanford, CA.
Stribeck R. 1902. Die wesentlichen Eigenschaften der Gleit- und Rollenlager. Zeitschrift des Vereines deutscher Ingenieure, 46(37).
Tagawa, Y. & Fukui, K. 1994. Inverse dynamics calculation of nonlinear model using low sensitivity compensator. In Proceedings of Dynamics and Design Conference, Akita, pp. 185-188.
Tait, M., Couloigner, I., Guzman, M.J., Lissel, S. L., 2007. Vision Based Deformation Monitoring of a Masonry Wall under Simulated Earthquake Conditions. 15th European Signal Processing Conference (EUSIPCO 2007), Poznan, Poland, September 3-7, 2007.
Tanner, N. A., Farrar, C. R., and Sohn, H., 2002, "Structural Health Monitoring Using Wireless Sensing Systems with Embedded Processing," in Non-destructive Evaluation and Health Monitoring of Aerospace Materials and Civil Infrastructure, Newport Beach, CA, March 18–19, Proceedings of the SPIE, Vol. 4704, 215–224.
Tanner, N. A., Wait, J. R., Farrar, C. R., and Sohn, H., 2003, “Structural Health Monitoring Using Modular Wireless Sensors,” Journal of Intelligent Material Systems and Structures, Vol. 14, No. 1, 43–56.
Technical Committee 1, TWG 1.3, 1986, Recommended testing procedures for assessing the behaviour of structural steel elements under cyclic loads, ECCS, No. 45.
Thayer W. J. 1958. Transfer functions for Moog servovalves. Technical report, MOOG Inc. Controls Division, East Aurora, NY 14052.
Tournas, E., 1999. Developing a Videometry System for Monitoring Dynamic Phenomena. PhD thesis, Lab of Photogrammetry, NTUA (in Greek).
Tu, J. Y., Lin, P. Y., Stoten, D. P. & Li, G. 2009. Testing of dynamically substructured, base-isolated systems using adaptive control techniques. Earthquake Engineering & Structural Dynamics. (doi:10.1002/eqe.962)
Turchetta R., CMOS monolithic active pixel sensors (MAPS) for scientific applications: Some notes about radiation hardness, Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, Volume 583, Issue 1, 11 December 2007, Pages 131-133 Proceedings of the 6th International Conference on Radiation Effects on Semiconductor Materials, Detectors and Devices - RESMDD 2006.
Wagg D.J., Stoten D.P.. 2001. Substructuring of Dynamical Systems via the Adaptive Minimal Control Synthesis Algorithm. Earthquake Engng Struct. Dyn. 30: 865-877
Wallace, M. I., Wagg, D. J., Neild, S. A., 2005a, An adaptive polynomial based forward prediction algorithm for multi-actuator real-time dynamic substructuring, Proc. R. Soc. A, 461, 3807-3826.
Wallace, M. I., Sieber, J., Neild, S. A., Wagg, D. J., Krauskopf, B., 2005b, Stability analysis of real-time dynamic substructuring using delay differential equation models, Earthquake Engng Struct. Dyn., 34(15), 1817-1832.
Wang, H. & Xie, Y. 2009. Adaptive inverse dynamics control of robots with uncertain kinematics and dynamics. Automatica 45, 2114-2119.
Wang, M. L., Gu, H., Lloyd, G. M., and Zhao, Y., 2003a, "A Multichannel Wireless PVDF Displacement Sensor for Structural Monitoring," in Proceedings of the International Workshop on Advanced Sensors, Structural Health Monitoring and Smart Structures, Tokyo, Japan, November 10–11.
Wang, Y., Lynch, J. P., and Law, K. H., 2005, “Wireless Structural Sensors Using Reliable Communication Protocols for Data Acquisition and Interrogation,” in Proceedings of the 23rd International Modal Analysis Conference (IMAC XXIII), Orlando, FL, January 31– February 3.
Weng Jian-Huang, Loh Chin-Hsiung, Jerome P. Lynch, Kung-Chun Lu, Pei-Yang Lin, Yang Wang (2008). Output-only modal identification of a cable-stayed bridge using wireless monitoring systems. Engineering Structures 30 (2008) 1820–1830.
Widrow B. and Walach E. 2008. Adaptive Inverse Control. John Wiley & Sons, Inc., Hoboken, New Jersey.
Wikipedia. 2009. http://en.wikipedia.org/wiki/Model_predictive_control.
Williams D. M., Williams M. S. and Blakeborough A. 2001. Numerical Modelling of a Servohydraulic Testing System for Structures. Journal of Engineering Mechanics, 127(8): 816-827, August 2001.
Wu B., Bao H., Ou J., Tian S. 2005. Stability and Accuracy Analysis of Central Difference Method for Real-time Substructure Testing. Earthquake Engineering and Structural Dynamics, 34: 705-718.
Wu, B., Xu, G., Wang, Q., Williams, M.S. 2006. Operator-splitting method for real-time substructure testing. Earthquake Engineering and Structural Dynamics, 35(3): 293-314.
Yang Wang, Jerome P. Lynch and Kincho H. Law. 2007. A wireless structural health monitoring system with multithreaded sensing devices: design and validation. Structure and Infrastructure Engineering, Vol. 3, No. 2, June 2007, 103-120.
Jung R.-Y., Shing P.B. 2006. Performance Evaluation of a Real-time Pseudodynamic Test System. Earthquake Engineering and Structural Dynamics, 789-810.
Jung R.-Y., Shing P.B., Stauffer Eric and Thoen Bradford. 2007. Performance of a Real-time Pseudodynamic Test System considering nonlinear structural response. Earthquake Engineering and Structural Dynamics, 1785-1809.
Zhao, J., French, C., Shield, C., Posbergh, T., 2003, Considerations for the development of real-time dynamic testing using servo-hydraulic actuation, Earthquake Engng Struct. Dyn., 32, 1773-1794.
Zhao, F. and Guibas, L.. 2004. Wireless Sensor Networks: An Information Processing Approach, Morgan Kaufman, San Francisco, CA.
Zhou, W., Chew, C.-M. & Hong, G.-S. 2006. Inverse dynamics control for series damper actuator based on MR fluid damper. In Proceedings, 2005 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, pp. 473-478.