Towards Dynamic Master Determination in MEMS-based Micro-Scanning LiDAR Systems

Philipp Stelzer1, Andreas Strasser1, Christian Steger1 and Norbert Druml2

Abstract— Automated driving has been expected for decades. The first systems, which at least partially automate the vehicle, have been installed in higher-priced vehicles for several years. In the near future, however, many more competencies are to be transferred to the systems, and the vehicle will thus be fully automated. Such systems receive their data from various sensor systems such as Light Detection And Ranging (LiDAR). It is therefore essential that this information is transmitted correctly and reliably to the environmental perception system. To ensure this, redundancy of sensors is a key factor in addition to diversity. For example, multiple independently controlled MEMS-based LiDAR systems can be operated synchronously. This requires the selection of a Master system that all Slave systems can reliably follow. In this publication, an architecture for MEMS-based Micro-Scanning LiDAR systems is proposed to determine the appropriate system as Master. The architecture has been implemented on an FPGA prototyping platform to demonstrate its feasibility and evaluate its performance.

I. INTRODUCTION

Until now, Advanced Driver-Assistance Systems (ADAS) have taken over tasks such as lane keeping, but the driver has had to be ready to intervene at any time [1]. For example, in Article 8 of the Vienna Convention on Road Traffic, some countries have agreed that the driver must have a continuous opportunity to take control of the vehicle [2]. There are also ongoing considerations about how, when, how long, and where a human can intervene in the systems of automated vehicles. Terken et al. [3], for example, have discussed various use cases in which shared control can occur, and Flemisch et al. [4] made similar considerations in their article.
However, these approaches are only possible up to SAE level 3, since beyond that the driver is only a passenger without responsibilities. These SAE levels are explained in SAE International Standard J3016 [5]. At the moment this is still a vision of the future, but in a few years highly automated vehicles will appear on the roads. Consequently, several projects have been addressing this issue for some time, developing concepts and systems to enable SAE levels 4 and 5. In the PRYSTINE project, for instance, camera, robust Radio Detection And Ranging (RADAR), and Light Detection And Ranging (LiDAR) sensors are deployed to achieve Fail-operational Urban Surround perceptION (FUSION) [6]. Figure 1 illustrates this FUSION concept of PRYSTINE. How such fail-operational architectures for environmental perception systems can be designed is also described by Kohn et al. [7] in their publication. Fail-operational behaviour is mandatory for highly automated vehicles of SAE levels 4 and 5, as also claimed by Vermesan et al. [8] and Macher et al. [9]. In the absence of the driver at SAE levels 4 and 5, the vehicle requires multiple systems responsible for executing steering, acceleration/deceleration, and monitoring of the driving environment. Likewise, the fallback for the dynamic driving task is no longer the driver but, from SAE level 4 onwards, the system itself. Such fusion systems combine data from different sensors and thus provide reliable information about the driving environment. Koci et al. [10] and De Silva et al. [11] discussed in their publications the importance of such sensors and of data fusion for automated vehicles.

1 Institute of Technical Informatics, Graz University of Technology, 8010 Graz, Austria {stelzer, strasser, steger}@tugraz.at
2 ATV Sense and Control, Infineon Technologies Austria AG, 8020 Graz, Austria {norbert.druml}@infineon.com
Therefore, affordable automotive-qualified RADAR and LiDAR components that can be deployed in large vehicle fleets for environmental detection are important. An automotive-qualified MEMS-based LiDAR has been presented by Yoo et al. [12], which can be a key to affordable LiDAR sensor systems in highly automated vehicles. But to achieve the largest possible Field-of-View (FoV), multiple MEMS-based LiDAR systems must be synchronised. For this purpose, a Master system has to be selected that operates a MEMS mirror with physical properties such that all Slave systems can reliably follow it. If the LiDAR systems were operated asynchronously, so-called ghost objects could occur, as described by Baumgart et al. [13]. Therefore, our publication addresses the determination of an appropriate Master system for the synchronisation of multiple independently controlled MEMS-based Micro-Scanning LiDAR systems.

Fig. 1. PRYSTINE's concept view of a Fail-operational Urban Surround perceptION (FUSION) [6].
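The Master-selection requirement described above — a Master whose mirror every Slave can reliably follow — can be illustrated with a minimal sketch. This is not the architecture proposed in the paper: the `MirrorSystem` fields and the feasibility rule (a Slave can lock on only if the Master's drive frequency lies within the Slave's own drivable range) are simplifying assumptions made purely for illustration.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class MirrorSystem:
    """Hypothetical per-system MEMS mirror characteristics (illustrative only)."""
    system_id: int
    freq_hz: float      # frequency at which this mirror is driven
    min_freq_hz: float  # lowest frequency the mirror can be driven at
    max_freq_hz: float  # highest frequency the mirror can be driven at


def can_follow(slave: MirrorSystem, master: MirrorSystem) -> bool:
    # Assumed rule: a Slave can synchronise only if the Master's drive
    # frequency lies inside the Slave's drivable frequency range.
    return slave.min_freq_hz <= master.freq_hz <= slave.max_freq_hz


def determine_master(systems: List[MirrorSystem]) -> Optional[MirrorSystem]:
    # Keep only candidates whose mirror every other system can follow.
    feasible = [m for m in systems
                if all(can_follow(s, m) for s in systems if s is not m)]
    if not feasible:
        return None  # no common operating point; synchronisation impossible
    # Tie-break deterministically, e.g. prefer the lowest drive frequency.
    return min(feasible, key=lambda m: (m.freq_hz, m.system_id))


if __name__ == "__main__":
    systems = [
        MirrorSystem(0, 2000.0, 1950.0, 2050.0),
        MirrorSystem(1, 2010.0, 1990.0, 2060.0),
        MirrorSystem(2, 2005.0, 1980.0, 2055.0),
    ]
    master = determine_master(systems)
    print(master.system_id if master else "no feasible Master")
```

In this toy setup all three candidates fall inside every other mirror's range, so the tie-break selects the system with the lowest drive frequency; if the ranges were disjoint, `determine_master` would return `None`, i.e. the asynchronous case in which ghost objects could arise.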
ACKNOWLEDGMENT

The authors would like to thank all national funding authorities and the ECSEL Joint Undertaking, which funded the PRYSTINE project under grant agreement number 783190.

PRYSTINE is funded by the Austrian Federal Ministry of Transport, Innovation and Technology (BMVIT) under the program ICT of the Future between May 2018 and April 2021 (grant number 865310). More information: https://iktderzukunft.at/en/.
REFERENCES
[1] C. Brünglinghaus, “Wie das Recht automatisiertes Fahren hemmt,” ATZ - Automobiltechnische Zeitschrift, vol. 117, no. 4, pp. 8–13, Apr 2015. [Online]. Available: https://doi.org/10.1007/s35148-015-0039-0
[2] United Nations Conference on Road Traffic, “19. Convention on Road Traffic,” retrieved: October, 2019. [Online]. Available: https://treaties.un.org/pages/ViewDetailsIII.aspx?src=TREATY&mtdsg_no=XI-B-19&chapter=11&Temp=mtdsg3&clang=_en
[3] J. Terken and B. Pfleging, “Toward shared control between automated vehicles and users,” Automotive Innovation, vol. 3, no. 1, pp. 53–61, 2020.
[4] F. Flemisch, D. Abbink, M. Itoh, M.-P. Pacaux-Lemoine, and G. Weel, “Shared control is the sharp end of cooperation: Towards a common framework of joint action, shared control and human machine cooperation,” IFAC-PapersOnLine, vol. 49, no. 19, pp. 72–77, 2016, 13th IFAC Symposium on Analysis, Design, and Evaluation of Human-Machine Systems HMS 2016. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S2405896316320547
[5] SAE, “SAE International Standard J3016 - Taxonomy and Definitions for Terms Related to On-Road Motor Vehicle Automated Driving Systems,” SAE International, Standard, January 2014.
[6] N. Druml, G. Macher, M. Stolz, E. Armengaud, D. Watzenig, C. Steger, T. Herndl, A. Eckel, A. Ryabokon, A. Hoess, S. Kumar, G. Dimitrakopoulos, and H. Roedig, “PRYSTINE - Programmable Systems for Intelligence in Automobiles,” in 2018 21st Euromicro Conference on Digital System Design (DSD), Aug 2018, pp. 618–626.
[7] A. Kohn, R. Schneider, A. Vilela, A. Roger, and U. Dannebaum, “Architectural Concepts for Fail-Operational Automotive Systems,” in SAE 2016 World Congress and Exhibition. SAE International, April 2016.
[8] O. Vermesan, R. Bahr, R. John, M. Ottella, R. Gjølstad, O. Buckholm, and H.-E. Sand, “Advancing the Design of Fail-Operational Architectures, Communication Modules, Electronic Components, and Systems for Future Autonomous/Automated Vehicles,” in Intelligent System Solutions for Auto Mobility and Beyond, C. Zachäus and G. Meyer, Eds. Cham: Springer International Publishing, 2021, pp. 53–71.
[9] G. Macher, N. Druml, O. Veledar, and J. Reckenzaun, “Safety and Security Aspects of Fail-Operational Urban Surround perceptION (FUSION),” in Model-Based Safety and Assessment, Y. Papadopoulos, K. Aslansefat, P. Katsaros, and M. Bozzano, Eds. Cham: Springer International Publishing, 2019, pp. 286–300.
[10] J. Kocić, N. Jovičić, and V. Drndarević, “Sensors and Sensor Fusion in Autonomous Vehicles,” in 2018 26th Telecommunications Forum (TELFOR), 2018, pp. 420–425.
[11] V. De Silva, J. Roche, and A. Kondoz, “Robust Fusion of LiDAR and Wide-Angle Camera Data for Autonomous Mobile Robots,” Sensors, vol. 18, no. 8, 2018. [Online]. Available: https://www.mdpi.com/1424-8220/18/8/2730
[12] H. W. Yoo, N. Druml, D. Brunner, C. Schwarzl, T. Thurner, M. Hennecke, and G. Schitter, “MEMS-based lidar for autonomous driving,” e & i Elektrotechnik und Informationstechnik, vol. 135, no. 6, pp. 408–415, Oct 2018.
[13] M. Baumgart, C. Consani, M. Dielacher, and N. Druml, “Optical simulation of Time-of-Flight sensor accuracy in rain,” in The European Conference on Lasers and Electro-Optics. Optical Society of America, 2017, p. CH P 25.
[14] Velodyne LiDAR, “HDL-64E,” 2016.
[15] R. Halterman and M. Bruch, “Velodyne HDL-64E lidar for unmanned surface vehicle obstacle detection,” in Unmanned Systems Technology XII, G. R. Gerhart, D. W. Gage, and C. M. Shoemaker, Eds., vol. 7692, International Society for Optics and Photonics. SPIE, 2010, pp. 123–130. [Online]. Available: https://doi.org/10.1117/12.850611
[16] D. Wang, C. Watkins, and H. Xie, “MEMS Mirrors for LiDAR: A Review,” Micromachines, vol. 11, no. 5, 2020. [Online]. Available: https://www.mdpi.com/2072-666X/11/5/456
[17] N. Druml, I. Maksymova, T. Thurner, D. Van Lierop, M. Hennecke, and A. Foroutan, “1D MEMS Micro-Scanning LiDAR,” in The Ninth International Conference on Sensor Device Technologies and Applications.
[19] B. Borovic, A. Q. Liu, D. Popa, H. Cai, and F. L. Lewis, “Open-loop versus closed-loop control of MEMS devices: choices and issues,” Journal of Micromechanics and Microengineering, vol. 15, no. 10, p. 1917, 2005.
[20] P. Stelzer, A. Strasser, C. Steger, A. Garcia, and N. Druml, “Fast Angle Adaption of a MEMS-based LiDAR System,” IFAC-PapersOnLine, vol. 52, no. 15, pp. 55–60, 2019, 8th IFAC Symposium on Mechatronic Systems MECHATRONICS 2019.
[21] P. Deng and W. Ma, “Nonlinearity investigation of the MEMS scanning mirror with electrostatic comb drive,” in The 9th IEEE International Conference on Nano/Micro Engineered and Molecular Systems (NEMS), 2014, pp. 212–215.
[22] P. Stelzer, A. Strasser, C. Steger, H. Plank, and N. Druml, “Towards Synchronisation of Multiple Independent MEMS-based Micro-Scanning LiDAR Systems,” in 2020 IEEE Intelligent Vehicles Symposium (IV), 2020, pp. 1080–1085.
[23] A. Strasser, P. Stelzer, C. Steger, and N. Druml, “Towards Synchronous Mode of Multiple Independently Controlled MEMS Mirrors,” IFAC-PapersOnLine, vol. 52, no. 15, pp. 31–36, 2019, 8th IFAC Symposium on Mechatronic Systems MECHATRONICS 2019.