Page 1: Android camera overview

Android Camera Subsystem

Author: Madhumitha

Page 2: Android camera overview
Page 3: Android camera overview

Application Layer

The heart of any camera app is the Camera class. It provides methods to set parameters, auto-focus, and, of course, take a picture. When you initialize a Camera object, you pass it the SurfaceHolder of a SurfaceView that represents the area of the UI that will contain the camera preview image. To make your app record video, you'll need a MediaRecorder object as well. This hooks into the media framework (which we'll discuss below) and abstracts the task of recording audio and video. It expects a Camera object and a file descriptor for the output file, as well as a number of parameters. In short: when application developers need to use the camera, they grab a Camera object; when they need to record something, they grab a MediaRecorder object.
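A minimal sketch of that flow, using the legacy android.hardware.Camera API (the class and method names of the sketch itself are hypothetical; only the framework calls are real, and error handling is omitted):

    import android.hardware.Camera;
    import android.view.SurfaceHolder;

    public class PreviewSketch {
        private Camera camera;

        /** Open the camera and route its preview frames to the given surface. */
        void startPreview(SurfaceHolder holder) throws java.io.IOException {
            camera = Camera.open();            // grab the Camera object
            camera.setPreviewDisplay(holder);  // the SurfaceHolder described above
            camera.startPreview();
        }

        /** Auto-focus, then capture a JPEG once focus completes. */
        void capture() {
            camera.autoFocus((success, cam) ->
                    cam.takePicture(null, null, (jpegData, c) -> {
                        // jpegData holds the encoded picture bytes
                    }));
        }

        void shutdown() {
            camera.stopPreview();
            camera.release();                  // hand the camera back to the system
        }
    }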

Page 4: Android camera overview

JNI layer

The Java Native Interface (JNI) is what allows Android classes to use the native C++ libraries from within the Dalvik virtual machine. Look in frameworks/base/core/jni/ or frameworks/base/media/jni/ and you'll see C++ implementation files corresponding to many of the Android-specific Java classes – for example, android_hardware_Camera.cpp. These files contain methods for passing messages between the Java classes running inside a Dalvik virtual machine and the native C++ implementation of each class.
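For illustration, here is a hedged sketch of the Java side of that pattern; the class, library, and method names below are hypothetical (android.hardware.Camera follows the same shape with its own native methods):

    public class CameraJniSketch {
        static {
            // Loads the shared library that holds the C++ implementations,
            // i.e. code like what lives in android_hardware_Camera.cpp.
            System.loadLibrary("camera_jni_sketch"); // hypothetical library name
        }

        // Declared in Java, implemented in C++. The JNI glue forwards each
        // call to the native object that lives below this class.
        private native void nativeSetup(Object cameraThis);
        private native void nativeTakePicture();
    }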

Page 5: Android camera overview

Native layer glue classes

Binder interface

- These processes communicate through the Binder system, designed by Google as a custom inter-process communication (IPC) system for Android.

- You will frequently see objects in the native libraries with names like ICamera, ICameraService, IMediaRecorder, and so on. These are objects that implement the Binder interfaces and thus act as proxy objects that marshal data across process boundaries.

- Each header/implementation pair usually contains an interface class, IObject, as well as a BpObject class and a BnObject class. Bp stands for "binder proxy", the class that sits in the application process; Bn stands for "binder native", the class that sits in the remote process (such as the MediaServer or SurfaceFlinger) and is essentially invisible to the end user.

- Binder proxy objects call a transact() method, which passes messages to the binder native object; the native object handles them with an onTransact() callback. transact() does not return until onTransact() returns, so these calls are synchronous.
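The same proxy/native split exists in the Java Binder API, which makes for a compact sketch of the pattern. Everything below is a hypothetical example (a made-up "echo" service), not the actual ICamera interface:

    import android.os.Binder;
    import android.os.IBinder;
    import android.os.Parcel;
    import android.os.RemoteException;

    // Plays the role of a BnObject: lives in the remote process.
    class EchoNative extends Binder {
        static final int TRANSACTION_ECHO = IBinder.FIRST_CALL_TRANSACTION;

        @Override
        protected boolean onTransact(int code, Parcel data, Parcel reply, int flags)
                throws RemoteException {
            if (code == TRANSACTION_ECHO) {
                int value = data.readInt();   // unmarshal the argument
                reply.writeInt(value);        // marshal the result
                return true;
            }
            return super.onTransact(code, data, reply, flags);
        }
    }

    // Plays the role of a BpObject: lives in the application process.
    class EchoProxy {
        private final IBinder remote;
        EchoProxy(IBinder remote) { this.remote = remote; }

        int echo(int value) throws RemoteException {
            Parcel data = Parcel.obtain();
            Parcel reply = Parcel.obtain();
            try {
                data.writeInt(value);
                // transact() blocks until onTransact() returns on the other
                // side, which is why binder calls are synchronous.
                remote.transact(EchoNative.TRANSACTION_ECHO, data, reply, 0);
                return reply.readInt();
            } finally {
                data.recycle();
                reply.recycle();
            }
        }
    }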

Page 6: Android camera overview

Native proxies

- The Camera and MediaRecorder objects, as well as the Surface object held by the SurfaceHolder you may have passed to the Camera object, have native proxy objects with the same names sitting below the JNI layer; for example, the native Camera object.

- One of these is created for each Java object you instantiate on the top layer.

- They implement the native methods of the Java object, but in practice this means that they wrap a binder proxy object and call methods on it when told to by the JNI layer; those calls in turn become transactions across the process boundary to the binder native object.

- Once the binder proxy receives a response from the other side, it passes it back up to the native proxy, which passes it back up through the JNI layer to you.

Page 7: Android camera overview

MediaServer

The MediaServer process is the heart of Android's media framework. It wraps up everything necessary to make media playback and recording possible, such as codecs, file authoring, and connections to hardware abstraction layers (HALs). Upon startup, MediaServer launches server threads for each of its major functions, including the CameraService and the MediaPlayerService. Within the media server, our binder native objects correspond to client objects, which connect to their corresponding services. The camera service handles clients, which sit on top of the hardware abstraction layer and correspond to Camera objects. Its primary purpose is to manage synchronization among clients – in other words, while several clients can be connected to the service, only one can actually use the camera at any given time.
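That exclusivity is visible from the application layer: Camera.open() simply fails when another client already holds the camera. A minimal sketch (the class and method names here are hypothetical):

    import android.hardware.Camera;

    public class CameraClientSketch {
        /** Returns the camera, or null if another client already holds it. */
        static Camera tryOpenCamera() {
            try {
                return Camera.open();   // binder call into the CameraService
            } catch (RuntimeException e) {
                return null;            // the service refused the connection,
                                        // e.g. the camera is already in use
            }
        }
    }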

Page 8: Android camera overview

Stagefright

Stagefright is a new addition to the Android source – although pieces of it have been in the source since Eclair, it was only fully implemented in Gingerbread. The job of Stagefright is to abstract (again with the abstraction!) the codec library. Stagefright is basically Google's in-house version of OpenCORE, created with help from PV.

The central class in the recording subsystem is StagefrightRecorder (header and implementation, confusingly under media/libmediaplayerservice/). A reference to a StagefrightRecorder object is bundled into each initialized MediaRecorderClient object. Given an encoding format, output format, and list of parameters, it selects an encoder and a MediaWriter object, as well as MediaSources representing data sources like the camera and microphone, then manages them as the MediaRecorder object tells it to (or complains if a bad combination of codec and file format was supplied at any point in the call stack).

MediaWriter is actually an interface, a simplification over a wide array of file-authoring classes that implement it. These classes call on the codec library to encode media coming in from the camera and microphone and then write it to a file in a particular format. There is an MPEG4Writer, an AMRWriter, an ARTPWriter, and so on, one for each of the implemented file formats. Notice that this is the endpoint for file authoring: each of these classes has all it needs to write a video file in the correct format and store it to the SD card.
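From the application layer, the writer selection is driven by the output format passed to MediaRecorder. A hedged sketch, assuming an audio-only recording (the class and helper names are hypothetical; the comments reflect the format-to-writer mapping described above):

    import android.media.MediaRecorder;

    public class WriterSelectionSketch {
        static MediaRecorder configure(String path, boolean useMpeg4)
                throws java.io.IOException {
            MediaRecorder recorder = new MediaRecorder();
            recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
            // The output format is what StagefrightRecorder uses to pick a
            // MediaWriter: MPEG_4 -> MPEG4Writer, RAW_AMR -> AMRWriter.
            recorder.setOutputFormat(useMpeg4
                    ? MediaRecorder.OutputFormat.MPEG_4
                    : MediaRecorder.OutputFormat.RAW_AMR);
            // An encoder that does not match the chosen container is rejected
            // somewhere down the call stack, as noted above.
            recorder.setAudioEncoder(useMpeg4
                    ? MediaRecorder.AudioEncoder.AAC
                    : MediaRecorder.AudioEncoder.AMR_NB);
            recorder.setOutputFile(path);
            recorder.prepare();
            return recorder;
        }
    }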

Page 9: Android camera overview

Freescale proprietary components

Android is portable, so it doesn't provide the actual glue to the hardware. Instead, it has a handful of interfaces for which the hardware designer can write implementations. The camera, for example, is hidden under a hardware abstraction layer, the CameraHal class (under hardware/mx5x/libcamera/). The HAL contains the name of the camera driver (as well as other important system files it needs to query) and calls IOCTLs on it to set it up and use its functions. It also performs other important work, such as encoding the picture in JPEG format or converting between color spaces.

Then there are the codec libraries. If you look on your Android device in /system/lib/ you'll see a pile of .so files. These are shared libraries, compiled when you first built Android for your device, and they are referenced by running processes such as MediaServer. Among them are a handful of precompiled libraries provided by Freescale – back on your host machine, these are under device/fsl/proprietary/. You won't be able to see the source code for what's in these libraries, because they're closed-source; however, you can get an idea of who calls whom with objdump. Ultimately, each of the OMX codecs is linked within these libraries to a VPU codec, which itself connects to libvpu.so. The codecs are hardware-accelerated: encoding and decoding are complex jobs, so they are offloaded to the VPU.

Page 10: Android camera overview

Kernel components

- Android runs on a Linux kernel and follows most of the rules that apply to normal Linux systems. To communicate with the hardware, Android processes talk to device drivers, exposed as usual in /dev/.

- Android uses V4L2, the official Linux API for video drivers; both the camera and overlay drivers are V4L2-compliant. The drivers are exposed in the filesystem as /dev/video0 (the camera) and /dev/video16 (the overlay device), while /dev/fb0 is the actual display device driver.

- The camera driver itself is in android/kernel_imx/drivers/media/video/boundary/. It is a V4L2 module designed to work with the OV5642 cameras that we use with Nitrogen boards, and it makes calls on the IPU driver to empty the buffers that the camera fills. The VPU driver source is in kernel_imx/drivers/mxc/vpu, and there is also an IPU driver in kernel_imx/drivers/mxc/ipu, which does essentially the same thing for the IPU.

Page 11: Android camera overview

Function call lifecycle: MediaRecorder.start()

- The start() native method is called through the JNI interface, and the native MediaRecorder object starts a transaction with its corresponding MediaRecorderClient, notifying it that a start request has been made.

- The MediaRecorderClient calls its start() method, which in turn calls the start() method of the wrapped StagefrightRecorder. Supposing that we have chosen to record an MPEG4 video, the startMPEG4Recording() method is called, and a new MPEG4Writer object is created with the file descriptor we previously passed to the top-level MediaRecorder object.

- The MPEG4Writer is set up with some initial parameters, including an encoder of some type, and then its own start() method is called. The parameters are copied, some header information is written, and a writer thread is started, which calls the start() methods of a number of Tracks wrapping the AudioSource and CameraSource that were passed in previously.

- Each source has its own start() method called – in the case of the CameraSource, it calls the startRecording() method of the Camera object it wraps, and that call proceeds down the chain as described above.

- At the CameraHAL layer, buffers are set aside and filled with preview frames. This data is made available to the writer thread as a pointer to the memory where the frames can be found.
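A minimal sketch of the application-side calls that set this whole chain in motion (names other than the framework APIs are hypothetical; error handling is omitted):

    import android.hardware.Camera;
    import android.media.MediaRecorder;
    import java.io.FileDescriptor;

    public class RecordSketch {
        static MediaRecorder startRecording(Camera camera, FileDescriptor fd)
                throws java.io.IOException {
            camera.unlock();                   // let MediaServer take the camera
            MediaRecorder recorder = new MediaRecorder();
            recorder.setCamera(camera);
            recorder.setAudioSource(MediaRecorder.AudioSource.CAMCORDER);
            recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
            recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
            recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
            recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
            recorder.setOutputFile(fd);        // the fd handed to MPEG4Writer
            recorder.prepare();
            // start() is where the chain described above begins:
            // JNI -> native MediaRecorder -> binder -> MediaRecorderClient
            //     -> StagefrightRecorder -> MPEG4Writer.start()
            recorder.start();
            return recorder;
        }
    }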

Page 12: Android camera overview

Thank You