Sensors 2015, 15, 29569-29593; doi:10.3390/s151129569
OPEN ACCESS

sensors
ISSN 1424-8220

www.mdpi.com/journal/sensors

Article

Automated Low-Cost Smartphone-Based Lateral Flow Saliva Test Reader for Drugs-of-Abuse Detection

Adrian Carrio 1,*, Carlos Sampedro 1, Jose Luis Sanchez-Lopez 1, Miguel Pimienta 2 and Pascual Campoy 1

1 Computer Vision Group, Centre for Automation and Robotics (UPM-CSIC), Calle José Gutiérrez Abascal 2, Madrid 28006, Spain; E-Mails: [email protected] (C.S.); [email protected] (J.L.S.-L.); [email protected] (P.C.)

2 Aplitest Health Solutions, Paseo de la Castellana 164, Madrid 28046, Spain; E-Mail: [email protected]

* Author to whom correspondence should be addressed; E-Mail: [email protected]; Tel.: +34-913-363-061; Fax: +34-913-363-010.

Academic Editor: Gonzalo Pajares Martinsanz

Received: 31 August 2015 / Accepted: 16 November 2015 / Published: 24 November 2015

Abstract: Lateral flow assay tests are becoming powerful, low-cost diagnostic tools. Obtaining a result is usually subject to the visual interpretation of colored areas on the test by a human operator, introducing subjectivity and the possibility of errors in the extraction of the results. While automated test readers providing a result-consistent solution are widely available, they usually lack portability. In this paper, we present a smartphone-based automated reader for drug-of-abuse lateral flow assay tests, consisting of an inexpensive light box and a smartphone device. Test images captured with the smartphone camera are processed in the device using computer vision and machine learning techniques to perform automatic extraction of the results. A thorough validation of the system has been carried out, demonstrating its high accuracy. The proposed approach, applicable to any line-based or color-based lateral flow test on the market, effectively reduces the manufacturing costs of the reader and makes it portable and massively available while providing accurate, reliable results.

Keywords: smartphone; drugs-of-abuse; diagnostics; computer vision; machine learning; neural networks

1. Introduction

Most rapid tests or qualitative screening tests on the market are chromatographic immunoassays. They are used to detect the presence or absence of a substance (analyte) in an organic sample. The result is obtained in a few minutes and without the need for specialized processes or equipment. This kind of test is an aid in the rapid diagnosis of different diseases (e.g., HIV, hepatitis, malaria) or certain physiological conditions (pregnancy, drugs-of-abuse, blood glucose levels, cholesterol, etc.).

In the particular case of drug-of-abuse detection, tests are commonly based on the principle of competitive binding: drugs that may be present in the organic sample (e.g., urine or saliva) compete against a drug conjugate, present in the test strip, for the specific binding sites of the antibody. During the test procedure, the sample migrates along the test strip by capillary action.

If a substance present in the sample is available in a concentration lower than the cutoff level, it will not saturate the binding sites of the particles coated with the antibody on the test strips. The coated particles will then be captured by the immobilized conjugate of each substance (drug), and a specific area of the strip will be visibly colored. No coloration will appear in this area if the concentration of the substance is above the cutoff level, as it will saturate all of the binding points of the specific antigen for that substance. An additional control area is usually placed on the test strip and colored upon effective migration through the initial test area, to confirm the validity of the test result.

Most of the results obtained using the commercially-available test kits are interpreted visually by a human operator, either by the presence or absence of colored lines (Figure 1) or by comparison of the changes in color of particular areas of a test strip against a pattern (Figure 2). Some of the problems arising from the use of rapid/screening tests are the following:

• Interpretation: This is done by direct visual inspection, and thus, the interpretation of the results may vary depending on the human operator (training, skills, lighting conditions, etc.). There are normally fewer interpretation errors in well-lit environments than when this kind of test is used under poor lighting, which may affect the ability of the operator to judge the result correctly (e.g., testing for drugs during a roadside control in the middle of the night).

• Confirmation is required: Any results obtained should be confirmed using a technique with higher specificity, such as mass spectrometry, especially when presumptive positive results are obtained.

• Conditions under which the tests are performed: Rapid tests are used in conditions where specialized equipment is not available.

• Dispersion and structure of the data: Test results and subject data are scattered and stored, usually in paper-based format.

• Processing and analysis of data: Data for decision-making are gathered and analyzed manually by human operators, a process prone to human error.

While most lateral flow tests are intended to operate on a purely qualitative basis, it is possible to measure the color intensity of the test lines in order to determine the quantity of analyte in the sample. Computer vision has been proven a useful tool for this purpose, as capturing and processing test images can provide objective color intensity measurements of the test lines with high repeatability.

By using a computer vision algorithm, the user-specific bias (the operator's ability to interpret), which may affect the result obtained, is eliminated.

Smartphones’ versatility (connectivity, high-resolution image sensors, high processing capabilities, use of multimedia contents, etc.) and performance, condensed in small and lightweight devices, together with the current status of wireless telecommunication technologies, exhibit a promising potential for these devices to be used for lateral flow test interpretation, even in the least developed parts of the world.

(a) Line-based test interpretation

(b) Negative (c) Positive (d) Invalid

Figure 1. (a) Line-based test interpretation samples; (b–d) Strip samples.

(a) Saliva alcohol color chart (b) Example of positive alcohol strip

Figure 2. (a) Color chart to obtain the relative blood alcohol concentration by comparison with the colored area in the test strip; (b) test sample including an alcohol strip.

In this paper, we present a novel algorithm to qualitatively analyze lateral flow tests using computer vision and machine learning techniques running on a smartphone device. The smartphone and the tests are housed in a simple hardware system consisting of an inexpensive 3D-printed light box.

The light box provides controlled illumination of the test during the image capture process, while the smartphone device captures and processes the image in order to obtain the result. Test data can then be easily stored and managed in a remote database by taking advantage of the smartphone connectivity capabilities, which can help to increase the efficiency of massive drug-of-abuse testing, for example in roadside controls, prisons or hospitals.

This approach effectively reduces the manufacturing costs of the reader, making it more accessible to the final customer, while providing accurate, reliable results. To the authors' knowledge, the

interpretation of lateral flow saliva tests for drug-of-abuse detection using computer vision and machine learning techniques on a smartphone device is completely novel.

The remainder of the paper is organized as follows. Firstly, a review of the state of the art for hand-held diagnostic devices, in general, and for drug-of-abuse lateral flow readers, in particular, is presented. Secondly, the saliva test in which the solution was implemented is introduced. Thirdly, the software and hardware solutions proposed are described. Fourthly, the methodology for validating the results is discussed. Then, the results of the evaluation through agreement and precision tests are shown. Finally, conclusions are presented.

2. State of the Art

Martinez et al. [1] used paper-based microfluidic devices for running multiple assays simultaneously in order to clinically quantify relevant concentrations of glucose and protein in artificial urine. The intensity of color associated with each colorimetric assay was digitized using camera phones. The same phone was used to establish a communications infrastructure for transferring the digital information from the assay site to an off-site laboratory for analysis by a trained medical professional.

A lens-free cellphone microscope was developed by Tseng et al. [2] as a mobile approach to provide infectious disease diagnosis from bodily fluids, as well as rapid screening of the quality of water resources, by imaging variously-sized micro-particles, including red blood cells, white blood cells, platelets and a water-borne parasite (Giardia lamblia). Improved works in this field were published by Zhu et al. [3], who also developed a smartphone-based system for the detection of Escherichia coli [4] in liquid samples in a glass capillary array.

Matthews et al. [5] developed a dengue paper test that could be imaged and processed by a smartphone. The test created a color on the paper, and a single image of the test result was captured and processed by the phone, quantifying the color levels by comparing them with reference colors.

Dell et al. [6] presented a mobile application that was able to automatically quantify immunoassay test data on a smartphone. Their system measured both the final intensity of the capture spot and the progress of the test over time, allowing more discriminating measurements to be made, while showing great speed and accuracy. However, registration marks and an intensity calibration pattern had to be included in the test to correctly process the image, and an additional lens was required for image magnification.

Uses of rapid diagnostic test reader platforms for malaria, tuberculosis and HIV have been reported by Mudanyali et al. [7,8]. Smartphone technology was also used to develop a quantitative rapid diagnostic test for multi-bacillary (MB) leprosy, which provided quantifiable and consistent data to assist in the diagnosis of MB leprosy.

A smartphone-based colorimetric detection system was developed by Shen et al. [9], together with a calibration technique to compensate for measurement errors due to variability in ambient light. Oncescu et al. [10] proposed a similar system for the detection of biomarkers in sweat and saliva. Similar colorimetric methods for automatic extraction of the result in ELISA plates [11] and of proteinuria in urine [12] have also been reported. However, none of those works presented developments on colored line detection.

In the area of drug-of-abuse detection, accurate confirmatory results are nowadays obtained in a laboratory, usually by means of mass spectrometry techniques. These laboratory-based solutions are expensive and time consuming, as the organic sample has to be present in the laboratory in order to perform the analysis. In contrast, screening techniques provide in situ, low-cost, rapid presumptive results with a relatively low error rate, with immunoassay lateral flow tests currently being the most common technique for this type of test. Nonetheless, as most lateral flow tests operate on a purely qualitative basis, obtaining a result is usually subject to the visual interpretation of colored areas on the test by a human operator, therefore introducing subjectivity and the possibility of human errors into the test result. Hand-held diagnostic devices, known as lateral flow assay readers, are widely used to provide automated extraction of the test result.

VeriCheck [13] is an example of an automated reader for lateral flow saliva tests, consisting of a conventional image scanner and a laptop. However, this solution is far from portable, as it comprises multiple devices and requires at least a power outlet for the scanner.

DrugRead [14] offers a portable automated reader solution on a hand-held device. Our system, however, provides a low-cost, massively available solution, as it has been implemented on a common smartphone device.

3. Saliva Test Description

The proposed image processing methodology can be applied to any immunoassay test on the market based on line presence/absence or on color interpretation. For the results and validation presented, the rapid oral fluid drug test DrugCheck SalivaScan [15], manufactured by Express Diagnostics Inc. (Minneapolis, MN, USA), was used. DrugCheck SalivaScan is an immunoassay for rapid qualitative and presumptive detection of drugs-of-abuse in human oral fluid samples.

The device is made of one or several strips of membrane incorporated in a plastic holder, as shown in Figure 3. For sample collection, a swab with a sponge containing an inert substance that reduces saliva viscosity is used. A saturation indicator is placed inside the collection swab to control the volume of saliva collected and to provide the indication to start the reaction (incubation) of the rapid test. The test may contain any combination of the parameters/substances and cutoff levels listed in Table 1.

Table 1. Calibrators and cutoff levels for different line-based drug tests.

Test | Calibrator | Cutoff Level (ng/mL)

Amphetamine (AMP) | D-Amphetamine | 50
Benzodiazepine (BZO) | Oxazepam | 10
Buprenorphine (BUP) | Buprenorphine | 5
Cocaine (COC) | Benzoylecgonine | 20
Cotinine (COT) | Cotinine | 50
EDDP (EDDP) | 2-Ethylidene-1,5-dimethyl-3,3-diphenylpyrrolidine | 20

Ketamine (KET) | Ketamine | 50
Marijuana (THC) | 11-nor-∆9-THC-9 COOH | 12
Marijuana (THC) | ∆9-THC | 50
Methadone (MTD) | Methadone | 30
Methamphetamine (MET) | D-Methamphetamine | 50
Opiates (OPI) | Opiates | 40
Oxycodone (OXY) | Oxycodone | 20
Phencyclidine (PCP) | Phencyclidine | 10
Propoxyphene (PPX) | Propoxyphene | 50
Barbiturate (BAR) | Barbiturate | 50

Figure 3. DrugCheck SalivaScan test. During the test procedure, the collection swab (right) will be inserted into the screening device (left).

Additionally, the test may include a strip for the detection of the presence of alcohol (ethanol) in oral fluid, providing an approximation of the relative blood alcohol concentration. When in contact with solutions of alcohol, the reactive pad on the strip will rapidly change color depending on the concentration of alcohol present. The pad employs a solid-phase chemistry that uses a highly specific enzyme reaction. The detection levels of relative blood alcohol concentration range from 0.02% to 0.30%.

3.1. Test Procedure

The first step in the test procedure consists of saturating the saliva test sponge. For this, the donor sweeps the inside of the mouth (cheek, gums, tongue) several times using a collection swab and holds it in his or her closed mouth until the color on the saturation indicator strip appears in the indicator window.

The collection swab can then be removed from the mouth and inserted into the screening device. Once the specimen is dispersed among all strips, the device should be set and kept upright on a flat surface

while the test is running. After use, the device can be disposed of or sent to a laboratory for confirmation of a presumptive positive result.

3.2. Interpretation of Results

In the case of non-alcohol strips, interpretation is based on the presence or absence of lines. Two differently-colored lines may appear in each test strip, a control line (C) and a test line (T), leading to different test results, as shown in Figure 1. The areas where these lines may appear are called the control region and the test region, respectively.

Negative results can be read as soon as both lines appear on any test strip, which usually happens within 2 min. Presumptive positive results can be read after 10 min. Three possible results may be obtained:

1. Positive: Only one colored line appears, in the control region. No colored line is formed in the test region for a particular substance. A positive result indicates that the concentration of the analyte in the sample exceeds the cutoff level.

2. Negative: Two colored lines appear on the membrane. One line is formed in the control region and another line in the test region for the corresponding substance. A negative result indicates that the analyte concentration is below the cutoff level.

3. Invalid result: No control line is formed. The result of any test in which no control line appears during the specified time should not be considered.

The intensity of the colored line in the test region may vary depending on the concentration of the analyte present in the specimen. Therefore, any shade of color in the test region should be considered negative.
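The line-based decision rules above reduce to a small lookup on two booleans. A minimal sketch follows; the function name and its boolean inputs are illustrative, not part of the test kit's documentation:

```python
def interpret_strip(control_line_visible: bool, test_line_visible: bool) -> str:
    """Map line presence on a non-alcohol strip to a qualitative result.

    Any shade of color in the test region counts as a visible test line,
    which is why test_line_visible is a plain boolean here.
    """
    if not control_line_visible:
        return "invalid"    # no control line: the result must be discarded
    if test_line_visible:
        return "negative"   # both lines visible: analyte below the cutoff
    return "positive"       # control line only: analyte exceeds the cutoff
```

For example, interpret_strip(True, False) returns "positive", matching rule 1 above.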

In the case of saliva alcohol strips, the interpretation of the results should be made by comparing the color obtained on the reagent strip against a printed color pattern that is provided with the test (Figure 2). Alcohol strips must be read at 2 min, as the pad color may change, and again, three possible results may be obtained:

1. Positive: The test will produce a color change in the presence of alcohol in the sample. The color intensity ranges from light blue at a 0.02% relative blood alcohol concentration to dark blue near a 0.30% relative blood alcohol concentration. An approximation of the relative blood alcohol concentration within this range can be obtained by comparison with the provided color pattern (Figure 2).

2. Negative: If the test presents no color change, this should be interpreted as a negative result, indicating that alcohol has not been detected in the sample.

3. Invalid: If the color pad is already colored blue before applying the saliva sample, the test should not be used.

4. System Description

In the following section, a description of the saliva test reader system is presented. Firstly, image acquisition aspects are discussed, including a description of the light box device used for illumination

normalization purposes. Secondly, the computer vision algorithms used in the image processing stage are described. Finally, the machine learning algorithms used for lateral histogram classification are presented.

4.1. Image Acquisition

Image acquisition is an extremely important step in computer vision applications, as the quality of the acquired image will condition all further image processing steps. Images must meet certain requirements in terms of image quality (blur, contrast, illumination, etc.). The positions of the camera (a mobile device in our case) and the object to be captured (here, the test) should remain in a constant relative position for the best results. However, contrary to traditional image processing applications, a mobile device is hand-held and, therefore, does not have a fixed position with respect to the test, which can lead to motion blur artifacts. Furthermore, mobile devices are used in dynamic environments, implying that ambient illumination has to be considered in order to obtain repeatable results regardless of the illumination conditions.

In order to minimize all image acquisition-related problems, a small light box was designed to keep the relative position between the smartphone and the test approximately constant, while removing external illumination and projecting white light onto the test with an embedded electronic lighting system. The light box, with dimensions of 150 × 70 × 70 mm, is shown in Figure 4. It is very portable, weighing only 300 g, and it can be manufactured at low cost with a 3D printer.

Figure 4. Light box with embedded electronic lighting system, which minimizes the relative movement between the smartphone and the test and the effects of external illumination changes.

In order to acquire an image, the saliva test is inserted into the light box; the lighting system is activated using a mounted button, and the smartphone is attached to the light box. The smartphone application has an implemented timer, which allows one to measure the elapsed time and to provide the user with a result as soon as it is available. The test reader provides a result by using a single captured image.

Three smartphone devices were selected for capturing and processing the test images, taking into account their technical specifications and the mobile phone market share: Apple iPhone 4, 4S and 5.

For the purpose of implementing and testing the computer vision and machine learning algorithms, a total of 683 images, containing a total of 2696 test strips, were acquired with the mentioned iPhone models.

4.2. Image Preprocessing and Strip Segmentation

Even with an elaborate approach to the image capture procedure, further image processing stages have to deal with image noise and small displacements and rotations of the test within the image, which can be caused by many factors. To highlight a few: smartphones might have a loose fit in the light box fastening system; the built-in smartphone cameras come in a variety of resolutions and lenses; and there might be slight differences in the brightness of the saliva strip's material.

Once the image has been acquired, the first step is to localize in the image the region corresponding to the strips. For this purpose, this area is manually defined in a template image, which is stored in a database. This template image is only defined once during the implementation process and is valid for all tests sharing the same format, independent of the number of strips and the drug configuration.

For defining the area corresponding to the strips in the actual processed image, a homography approach based on feature matching is used. The first step, which is done off-line, consists of calculating keypoints in the template image and extracting the corresponding descriptors. In this approach, and with the aim of being computationally efficient, ORB (oriented FAST (Features from Accelerated Segment Test) and rotated BRIEF (Binary Robust Independent Elementary Features)) features are computed, and their descriptors are extracted. In [16], it is demonstrated that ORB is two orders of magnitude faster than SIFT, which is crucial for real-time processing on this kind of device.

ORB uses the well-known FAST keypoints [17,18] with a circular radius of nine (FAST-9) and introduces orientation to the keypoint by measuring the intensity centroid (oFAST). Then, the BRIEF descriptor [19] is computed, which consists of a bit string description of an image patch constructed from a set of binary intensity tests, using in this case a learning method for extracting subsets of uncorrelated binary tests (rBRIEF). The combination of oFAST keypoints and rBRIEF features forms the final ORB descriptor and makes ORB features rotation invariant and robust to noise. Once the ORB keypoints and their descriptors have been extracted from the template image, they are stored in the smartphone.

When a new image is captured from the device, the first step consists of extracting the ORB keypoints and their descriptors and matching them with the ones extracted from the template (Figure 5), which will provide the homography between both images, that is, the transformation that converts a point in the template image to the corresponding point in the current image.

The matching process is divided into three steps:

• First, a brute-force matching is computed. In this process, the descriptors are matched according to their Hamming [20] distance, selecting for the next stage the pairs with the minimum distance between them.

• Second, a mutual consistency step is performed to remove those matches that do not correspond uniquely to their counterparts in the other image.

• Finally, with the point pairs from the previous step, a homography transformation is computed. In this step, a random sample consensus (RANSAC) [21] method is used to remove the pairs that do not fit the rigid perspective transform, which are called outliers.

Once the homography between the template image and the current image has been computed, this transformation is applied to the four selected points in the template image that define the region of interest (ROI) of the strips (the area within the green border in Figure 5). This feature matching-based homography approach successfully deals with image noise, small input image displacements and rotations and different image resolutions.

Figure 5. Matching process between oriented FAST and rotated BRIEF (ORB) descriptors in the template image (left) and original image (right). White lines denote the matched points. Depicted in green is the searched region of interest that contains the strips in the actual processed image.

The next stage in the strip segmentation process is to localize the colored area of each strip in the image. The drug strip configuration of the test is known in advance through a unique batch number provided by the test manufacturer, so it is only required to check that the number of detected strips in the image matches the test configuration and to localize these strips in the image. As the interpretation method for tests containing an alcohol strip is specific to this drug, images containing an alcohol strip will be processed differently. Accordingly, the process for segmenting and localizing the colored area of each strip is as follows:

• If the image contains an alcohol strip, a thresholding procedure by the Otsu [22] method is applied to the R, G and B channels of the image ROI given by the homography. After that, two AND operations, between Channels G and B, and G and R, are applied with the purpose of filtering the image.

• If the image does not contain an alcohol strip, the image is thresholded using the Otsu method in the R and B channels, and finally, an OR operation between these images is applied.

Finally, in both procedures, a morphological closing operation is applied to the binary image resulting from the previous steps. Then, in order to remove isolated pixels, a filter based on the number of pixels in each column is applied. For each of the columns, if the number of non-zero pixels is less than 10% of the total number of pixels in that column, the column is set to zero. An example of the result of the segmentation process on a test image is shown in Figure 6a.

The final step in the segmentation process consists of a post-processing stage, in which the contours of the previous binary image are computed. In this stage, several filters are applied to the computed contours:

• First, a position-based filter is applied. The purpose of this filter is to remove those contours whose centroid is located in the lower half of the computed ROI of the image (area bounded in green in Figure 5). This is useful to filter out spurious contours, such as the ones bounded in red in Figure 6a.

• Second, we apply a filter based on the area enclosed by each of the computed contours. We remove those contours that satisfy Ai < 0.5 · Amaxcontour, where Ai is the area of contour i and Amaxcontour is the area of the largest contour. Again, the objective is to filter out small spurious contours that do not correspond to the colored region of each strip, which indicates the type of drug.

By applying the described segmentation process, the colored region of each strip, which indicates the type of drug of each test strip, is finally localized within the ROI image, as shown in Figure 6b. For each of these extracted ROIs, its size is used together with the position of its bottom-middle point as parameters to automatically determine the region on which the lateral histogram will be computed, bounded in green in Figure 6b. By limiting the extraction of the lateral histogram to this area, the problem of pixel intensity variations due to the shadows of the edges of the strip is minimized.


Figure 6. (a) Result of the segmentation process on a test image. Spurious contours, bounded in red, are filtered out during the post-processing stage; (b) Localization of the colored area of the strips after applying the segmentation process to a test image, bounded with a red rectangle. For each of these regions, its size is used together with the position of its bottom-middle point (depicted in blue) as parameters to automatically determine the region on which the lateral histogram will be computed, bounded with a green rectangle.

4.3. Lateral Histogram Extraction and Preprocessing

The regions of interest representing each strip segmented in the previous section need to be processed before the classification step takes place. Due to the different ways in which alcohol and non-alcohol strips should be interpreted, each requires a different preprocessing stage.

In the case of a non-alcohol strip, the preprocessing stage consists of the four steps described below, whose purpose is to simplify the information given to the classifier as much as possible in order to increase its efficiency. All of these steps are performed for each segmented non-alcohol strip.

The first step is the lateral histogram extraction from the area extracted during the segmentation stage. The lateral histogram is computed for each of these areas, where the average pixel intensity value for each image row (strip transversal direction) is computed according to:

x_{HG}(i) = \frac{1}{n} \sum_{\forall j} s(i, j) \quad (1)

where xHG(i) is the lateral histogram extracted value in the i coordinate; s(i, j) is the (i, j) pixel intensity value; i are image rows (strip transversal direction); j are image columns (strip longitudinal direction); and n is the number of image columns inside the lateral histogram area.
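In NumPy terms, Equation (1) is a row-wise mean over the selected region (a minimal sketch using a synthetic region; the real input would be the green-bounded area from the segmentation stage):

```python
import numpy as np

def lateral_histogram(region):
    """Equation (1): average pixel intensity of each image row
    (strip transversal direction) over the n image columns."""
    return region.mean(axis=1)

# Synthetic 2-row, 3-column region: rows average to 20 and 50.
region = np.array([[10.0, 20.0, 30.0],
                   [40.0, 50.0, 60.0]])
hist = lateral_histogram(region)  # → [20.0, 50.0]
```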

Due to differences in image acquisition, raw lateral histograms do not usually have the same number of bins. Therefore, the number of bins of each lateral histogram is adjusted to a fixed value, nbin = 100, through the use of quadratic interpolation. With this step, we ensure that the lateral histogram always has the same number of bins, independent of the image used.
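A sketch of this bin adjustment follows. Note one substitution: the paper uses quadratic interpolation, while this dependency-free fragment uses linear interpolation via np.interp purely to illustrate the resampling idea:

```python
import numpy as np

def resample_histogram(hist, nbin=100):
    """Resample a raw lateral histogram to a fixed number of bins.
    Note: the paper uses quadratic interpolation; linear interpolation
    is used here only to keep the sketch dependency-free."""
    old_x = np.linspace(0.0, 1.0, len(hist))
    new_x = np.linspace(0.0, 1.0, nbin)
    return np.interp(new_x, old_x, hist)

raw = np.arange(73, dtype=float)   # a raw histogram with 73 bins
fixed = resample_histogram(raw)    # always nbin = 100 bins
```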

Then, to minimize the effect of the light intensity changes in the same segmented strip and between two different images, a RANSAC technique is applied. The proposed model to be fitted is a linear model y = a · i + b, where y is the lateral histogram value; i is the coordinate; and a and b are the estimated parameters. Then, the lateral histogram is recalculated:

x_{HR}(i) = x_{HN}(i) - (a \cdot i + b) \quad (2)

where xHN(i) is the lateral histogram value in the i coordinate after the adjustment to nbin = 100.

As the lateral histogram peaks, which carry the information of the test and control regions, are clearly separated, the fourth and last pre-processing step tries to exploit this information by lining up these peaks. The histogram xHR is divided into two similar parts: the first part uses bins from one to nbin/2 = 50, while the last part uses bins from nbin/2 = 50 to nbin = 100. The minima for each part, hmin1(i) and hmin2(i), respectively, are computed. The final lateral histogram hx is the conjunction of both minimum values with a range of 15 bins in each direction, which means that the final lateral histogram hx has nbin-final = 62 bins. Note that the first 31 bins will correspond to the control line (C), and the remaining bins will correspond to the test line (T).
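The third and fourth steps above might be sketched as follows (a hypothetical fragment: the minimal two-point RANSAC and the fixed ±15-bin windows are illustrative choices, and the minima are assumed to lie at least 15 bins from the histogram edges):

```python
import random
import numpy as np

def ransac_line(y, n_iter=200, thresh=1.0, seed=0):
    """Fit y ≈ a*i + b robustly: repeatedly sample two points, count
    inliers, then refit on the best consensus set (minimal RANSAC)."""
    rng = random.Random(seed)
    x = np.arange(len(y), dtype=float)
    best = None
    for _ in range(n_iter):
        i1, i2 = rng.sample(range(len(y)), 2)
        a = (y[i2] - y[i1]) / (x[i2] - x[i1])
        b = y[i1] - a * x[i1]
        inliers = np.abs(y - (a * x + b)) < thresh
        if best is None or inliers.sum() > best.sum():
            best = inliers
    a, b = np.polyfit(x[best], y[best], 1)
    return a, b

def remove_baseline(hist):
    """Equation (2): x_HR(i) = x_HN(i) - (a*i + b)."""
    a, b = ransac_line(hist)
    i = np.arange(len(hist), dtype=float)
    return hist - (a * i + b)

def align_peaks(hist_100):
    """Fourth step: split the 100-bin histogram into two halves, locate
    the minimum of each, keep ±15 bins around each minimum and
    concatenate them into the final 31 + 31 = 62-bin histogram."""
    n = len(hist_100)
    m1 = int(np.argmin(hist_100[: n // 2]))
    m2 = n // 2 + int(np.argmin(hist_100[n // 2 :]))
    return np.concatenate([hist_100[m1 - 15 : m1 + 16],
                           hist_100[m2 - 15 : m2 + 16]])
```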

In the case of an alcohol strip, a different processing is done: the first and the second steps are analogous, but with nbin = 62. Then, a third step is performed to reduce the light intensity differences between two different images. The average of only the first nav = 15 values of the lateral histogram is calculated and then subtracted from the original lateral histogram:

h_x(i) = x_{HN}(i) - \frac{1}{n_{av}} \sum_{k=1}^{n_{av}} x_{HN}(k) \quad (3)


where hx is the lateral histogram value adjusted to nbin = 62 bins with no light intensity influence between two different images. The normalization is done only with respect to the mean of the first 15 values of the lateral histogram because this part of the histogram has proven to account well for illumination changes along the strip, so subtracting it helps to minimize the influence of differences in illumination.
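Equation (3) amounts to subtracting the mean of the first 15 bins (a minimal sketch with a synthetic histogram):

```python
import numpy as np

def normalize_alcohol(hist_62, n_av=15):
    """Equation (3): subtract the mean of the first n_av = 15 bins,
    which serve as an illumination reference for the whole strip."""
    return hist_62 - hist_62[:n_av].mean()

hist = np.full(62, 5.0)
hist[40] = 9.0                  # the reagent-pad response region
norm = normalize_alcohol(hist)  # reference bins go to zero
```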

4.4. Lateral Histogram Classification and Test Outcome

Three different supervised machine learning classifiers based on artificial neural networks (ANN) have been implemented for lateral histogram classification:

• A classifier for alcohol strip lateral histograms.
• A classifier for control lines in non-alcohol strip lateral histograms.
• A classifier for test lines in non-alcohol strip lateral histograms.

By dividing the classification task into three steps through the use of different classifiers, a better performance can be achieved due to the forced specialization of each classifier. All three classifiers have the same kind of model; only their structure (their size) and their parameters differ from one another.

Machine learning algorithms require a correct sample dataset in order to be trained. Supervised algorithmsrequire examples of each of the desired classes to be learned in order to perform classification tasks.

The parameters of the classifier need to be set after the best structure is selected through a cross-validation algorithm. The available dataset is randomly, but uniformly, divided into three sub-sets depending on their functionality. In our case, “training data” correspond to 70% of the available data and are used to adjust the parameters of the model; “validation data”, 15%, are used to check the model parameters, ensuring the correct training; finally, “test data”, 15%, test the performance of the classifier.
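The split could be sketched as follows (a hypothetical helper; the paper only states the 70/15/15 proportions, not the splitting code):

```python
import random

def split_dataset(samples, seed=0):
    """Shuffle and split into 70% training / 15% validation / 15% test."""
    rng = random.Random(seed)
    idx = list(range(len(samples)))
    rng.shuffle(idx)
    n_train = int(0.70 * len(samples))
    n_val = int(0.15 * len(samples))
    train = [samples[i] for i in idx[:n_train]]
    val = [samples[i] for i in idx[n_train:n_train + n_val]]
    test = [samples[i] for i in idx[n_train + n_val:]]
    return train, val, test

train, val, test = split_dataset(list(range(100)))
```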

Five sequential algorithms have been considered for generating the classifiers. Firstly, an unsupervised input data normalization is performed:

x_{normalized}(i) = \frac{h_x(i) - \bar{x}_{train}(i)}{\mathrm{std}(x_{train}(i))} \quad (4)

where hx(i) is the preprocessed lateral histogram value in the coordinate i; the subscript train indicates the “training data” set; x̄train(i) stands for the mean value; and std stands for the standard deviation.

Secondly, an unsupervised data mapping is done:

x_{mapped}(i) = 2 \cdot \frac{x_{normalized}(i) - min_{m,t}(i)}{max_{m,t}(i) - min_{m,t}(i)} - 1 \quad (5)

where minm,t(i) and maxm,t(i) are the minimum and maximum values used for the mapping in the i coordinate, calculated with the “training data” set.

Then, a supervised ANN, a multi-layer perceptron (MLP) [23], is first trained and afterwards executed to classify the data. This ANN is composed of several layers, called the “input layer”, “hidden layers” and the “output layer”, depicted in Figure 7. Each layer (except the “input layer”, which is only the input to the ANN) is formed by several neurons. Each neuron has a single output; multiple inputs (all of the outputs of the neurons of the previous layer); a weight value associated with each input; and a bias value.
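The two unsupervised preprocessing steps, Equations (4) and (5), can be sketched together as follows (the per-bin statistics mean_t, std_t, min_t and max_t are assumed to be precomputed from the “training data” set):

```python
import numpy as np

def normalize_and_map(h, mean_t, std_t, min_t, max_t):
    """Equation (4): per-bin standardization with training statistics;
    Equation (5): mapping to [-1, 1] with training-set extrema."""
    x_norm = (h - mean_t) / std_t                          # Equation (4)
    return 2.0 * (x_norm - min_t) / (max_t - min_t) - 1.0  # Equation (5)

# One-bin example: h = 4 standardizes to 1.0, which maps to +1.
out = normalize_and_map(np.array([4.0]), np.array([2.0]),
                        np.array([2.0]), np.array([-1.0]), np.array([1.0]))
```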

Page 14: sensors-15-29569

Sensors 2015, 15 29582

The behavior of each neuron, i.e., how its output is computed based on its inputs and internal parameters, is given by:

output = \mathrm{tansig} \left( \sum_{\forall i} input(i) \cdot weight(i) + bias \right) \quad (6)

where tansig(x) is the hyperbolic tangent sigmoid function. After the input x⃗NN = x⃗mapped is introduced to the ANN, it generates an output vector y⃗NN.

The number of neurons and their configuration (i.e., the structure of the classifier) and the weights and biases of each neuron (i.e., the parameters of the classifier) are calculated using the “training data” set.
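Equation (6) is the standard tanh neuron, and a whole layer is a matrix–vector product (a sketch; tansig is numerically identical to np.tanh):

```python
import numpy as np

def neuron_output(inputs, weights, bias):
    """Equation (6): output = tansig(sum_i input(i)*weight(i) + bias)."""
    return np.tanh(np.dot(inputs, weights) + bias)

def layer_forward(x, W, b):
    """One MLP layer: row r of W holds the weights of neuron r."""
    return np.tanh(W @ x + b)

# tanh(1*0.5 + 2*(-0.5) + 0.5) = tanh(0) = 0
y = neuron_output(np.array([1.0, 2.0]), np.array([0.5, -0.5]), 0.5)
```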

Figure 7. ANN-MLP structure diagram. Each neuron is represented by a circle, whose behavior is given by Equation (6). The connections between the inputs, the neurons and the outputs are represented with arrows.

In the next step, an unsupervised output data reverse mapping is done:

y_{rev\text{-}mapped}(k) = \frac{y_{NN}(k) + 1}{2} \left( max_{r,t}(k) - min_{r,t}(k) \right) + min_{r,t}(k) \quad (7)

where maxr,t(k) and minr,t(k) are the maximum and minimum values used for reverse mapping in the k coordinate of the output, calculated with the “training data” set.

Finally, an unsupervised thresholding operation generates the classification output. The maximum of the reverse-mapped output is calculated: ycandidate = max(y⃗rev-mapped). Then, the thresholding operation is applied:

y_{class} = \begin{cases} k, & \text{if } y_{candidate} \geq y_{thres}(k) \\ \text{Undetermined}, & \text{otherwise} \end{cases} \quad (8)

This last step ensures that the algorithm gives a trained output as a classification result only if its confidence level is high enough, above a certain threshold. Otherwise, it shows a conservative behavior, outputting “undetermined” as the classification result.
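Equations (7) and (8) together turn the raw network output into a guarded class decision (a sketch; the per-class thresholds y_thres are design parameters whose values the paper does not specify):

```python
import numpy as np

def reverse_map(y_nn, min_rt, max_rt):
    """Equation (7): undo the [-1, 1] mapping per output coordinate k."""
    return (y_nn + 1.0) / 2.0 * (max_rt - min_rt) + min_rt

def classify(y_rev, y_thres):
    """Equation (8): return the index of the strongest output if it
    reaches that class's threshold, else 'Undetermined'."""
    k = int(np.argmax(y_rev))
    return k if y_rev[k] >= y_thres[k] else "Undetermined"

y_rev = reverse_map(np.array([1.0, -1.0]),
                    np.zeros(2), np.ones(2))  # → [1.0, 0.0]
```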

4.4.1. Classifier for Alcohol Strip Lateral Histograms

The input to this classifier is the 62-bin preprocessed lateral histogram for alcohol strips. The output of the trained classifier is a value that indicates the result of the test. The alcohol strips used in these experiments allow for the detection of five alcohol levels in saliva, depicted in Figure 2a. However, only three output categories have been considered: “positive (1)”, “negative (2)” or “undetermined (3)”. The reason for this simplification can be found in the market demand, where saliva-based tests compete with other diagnostic technologies. Breath-based analyzers are generally used for measuring blood alcohol content in massive testing, such as roadside tests, where a high accuracy is expected and expensive equipment can be used. However, low-cost saliva-based alcohol tests are used in situations where only a binary output is necessary, such as detoxification clinics.

Due to the good separability between lateral histograms, these can be classified directly into the three output categories. The complete dataset available for this classifier is shown in Figure 8.

Figure 8. Lateral histograms used as the dataset for the classifier for the alcohol strip. Samples labeled as “negative” are represented in red, while “positive” samples are represented in blue. Note that the separability between classes is very good, and lateral histograms can be easily classified into three output categories (“positive”, “negative” or “undetermined”).

After an intensive training and output data analysis, it was determined that the best results were achieved using an ANN structure with 62 inputs, five neurons in a single hidden layer and two output neurons.

The evaluation of the classifier on the three data sub-sets, after the structure and parameters are calculated, is shown in Table 2. The performance of the classifier is excellent, showing no incorrect classifications.


Table 2. Evaluation of the classifier for alcohol strip lateral histograms. Note that confusion matrices have dimensions of 2 × 3, because “undetermined” samples were never considered as an input. “Undetermined” samples are just the classification output when a certain confidence level is not reached.

Dataset                         Success   Confusion Matrix (P, N, U)
Training data (238 samples)     100%      [52 0 0; 0 186 0]
Validation data (50 samples)    100%      [12 0 0; 0  38 0]
Test data (50 samples)          100%      [10 0 0; 0  40 0]

4.4.2. Classifier for Control Lines in Non-Alcohol Strip Lateral Histograms

The inputs to this classifier are the first 31 bins of the preprocessed lateral histogram of a non-alcohol strip (inside a black box in Figure 9). The output of the trained classifier can be “valid (1)”, “invalid (2)” or “undetermined (3)”. The complete dataset available for this classifier is shown in Figure 9.

Figure 9. Lateral histograms used as the dataset for the classifier for control lines for non-alcohol strip lateral histograms. Samples labeled as “invalid” are represented in green, while “valid” samples are represented in blue. Note that only the first 31 bins of each lateral histogram are used by this classifier. It can also be observed that the separability between classes is very good, allowing the use of only three classes in this classifier (“valid”, “invalid” or “undetermined”).

After training and analyzing the output data, it was determined that the best results were achieved using an ANN structure with 31 inputs, a single neuron in a single hidden layer and two output neurons.

The evaluation of the classifier on the three data sub-sets after the structure and parameters are calculated is shown in Table 3. Again, the performance of the classifier is excellent, showing no incorrect classifications.


Table 3. Evaluation of the classifier for control lines in non-alcohol strip lateral histograms. Note that confusion matrices have dimensions of 2 × 3 because “undetermined” samples were never considered as an input. “Undetermined” samples are just the classification output when a certain confidence level is not reached.

Dataset                         Success   Confusion Matrix (V, I, U)
Training data (1660 samples)    100%      [1396 0 0; 0 264 0]
Validation data (349 samples)   100%      [ 293 0 0; 0  56 0]
Test data (349 samples)         100%      [ 293 0 0; 0  56 0]

4.4.3. Classifier for Test Lines in Non-Alcohol Strip Lateral Histograms

The inputs to this classifier are the last 31 bins of the preprocessed lateral histogram of a non-alcohol strip (inside a black box in Figure 10). The output of the trained classifier can fall into six different categories: “very positive (1)”, “positive (2)”, “doubtful (3)”, “negative (4)”, “very negative (5)” or “undetermined (6)”. Although there are only three possible output test results (“positive”, “negative” or “undetermined”), the internal use of a larger number of categories (see Figure 11) by the classifier allows one to have enhanced control over the treatment of doubtful cases, in order to adjust for a desired false positive or false negative ratio. The complete dataset available for this classifier is shown in Figure 10.

Figure 10. Lateral histogram samples used as training data for the classifier for test lines in non-alcohol strip lateral histograms. Samples labeled as “very negative” are represented in red; “negative” in magenta; “doubtful” in black; “positive” in cyan; and “very positive” in blue. Note that only the last 31 bins of each lateral histogram are used by this classifier. It can also be observed that the separability between classes is not very good, which justifies the use of six classes in this classifier (“very positive”, “positive”, “doubtful”, “negative”, “very negative” or “undetermined”).


Figure 11. Lateral histograms representing a prototype of each labeled class used for the classifier for test lines in non-alcohol strips: (a) very positive; (b) positive; (c) doubtful; (d) negative; (e) very negative.


Once the training procedure was completed and the output data were analyzed, it was determined that the best results were achieved using an ANN structure with 31 inputs, two hidden layers with seven neurons each and five output neurons. The evaluation of the classifier on the three data sub-sets after the structure and parameters are calculated is shown in Table 4. The performance of the classifier is very good, showing only a few errors between adjacent labeled classes.

Table 4. Evaluation of the classifier for the test lines in non-alcohol strip lateral histograms. Note that confusion matrices have dimensions of 5 × 6 because “undetermined” samples were never considered as an input. “Undetermined” samples are just the classification output when a certain confidence level is not reached.

Dataset                         Success    Confusion Matrix (VP, P, D, N, VN, U)

Training data (1369 samples)    100%       [459   0   0   0   0  0
                                             0 122   0   0   0  0
                                             0   0 200   0   0  0
                                             0   0   0 258   0  0
                                             0   0   0   0 330  0]

Validation data (293 samples)   91.809%    [ 94   2   0   0   0  0
                                             5  19   2   0   0  0
                                             0   4  39   4   0  0
                                             0   0   2  49   3  0
                                             0   0   0   2  68  0]

Test data (293 samples)         89.761%    [ 93   2   0   0   0  0
                                             5  22   4   0   0  0
                                             0   1  35   8   0  0
                                             0   0   6  45   2  0
                                             0   0   0   2  68  0]

5. System Evaluation Methodology

A study was conducted by an external company to verify the performance of the application with regard to the two following aspects:

• Verify the agreement between visual results obtained by human operators and those obtained by the test reader (agreement).

• Check the repeatability of the results interpreted by the test reader (precision).

The following values should be established as an outcome of the testing:

1. Provide the number of cases (%) in which the results obtained by the test reader agreed with the interpretation made by an operator by visual interpretation, on a given representative sample size (N) (at least two or three independent operators should be considered, to minimize bias or the impact of subjectivity).


2. Provide the number of cases (%) in which the test reader is able to repeat the same result (positive, negative or invalid) when interpreting any test, given a representative sample size (N).

Ninety SalivaScan double-sided tests were used (detection of six drugs + alcohol in oral fluid) with the following configuration: amphetamine, ketamine, cocaine and methamphetamine on one side and opiates, marijuana and alcohol on the other side, with the cutoff levels indicated in Table 1. Three detection levels were considered: negative, cutoff and 3× cutoff (saliva controls with three times the cutoff concentration level).

In order to simulate the concentration at each detection level, positive and negative standard saliva controls were used. Such controls were sourced from Synergent Biochem Inc. (Hawaiian Gardens, CA, USA) [24].

The visual and automated results obtained for the alcohol (ALC) strip were not recorded, as the interpretation of the results is made by comparison of the color changes in the reagent pad against a pattern and not by the interpretation of the control and test lines.

The results interpreted by the test reader were extracted using an iPhone 4 (partial test) and an iPhone 4S (full test), both running iOS Version 6.1.3.

5.1. Agreement Test

In the agreement test, the results of 30 tests per detection level were considered. Two human operators made the visual interpretation of the results obtained on each test, and the second operator obtained a result using the test reader in addition to his or her own interpretation. Each operator logged the results independently from the other. The detailed steps followed on each test were the following:

1. The reaction on each test was started after adding the corresponding positive or negative control depending on the cutoff level being tested.

2. Operator 1 waited 10 min for incubation of the result, interpreted it and logged the test outcome.
3. Operator 2 interpreted the visual result, independently from Operator 1, and logged the test outcome.
4. Operator 2 interpreted the result using the test reader and logged the result.

A retest was performed at the negative and 3× detection levels, using a different lot. On each level, 20 tests were performed.

5.2. Precision Test

Test devices from the agreement evaluation were randomly selected, three tests from each detection level (negative, cutoff, 3×). Test 1 corresponds to an (amphetamine (AMP), ketamine (KET), cocaine (COC), methamphetamine (MET)) configuration, while Tests 2 and 3 had an (opiates (OPI), THC, ALC) configuration. Each of these tests was processed 40 times repeatedly.


6. Results

6.1. Agreement Test Results

Agreement test results are summarized in Table 5. Results show that there are differences each time the visual interpretation is made by each human operator. Results also show how the disagreement between expert operators can be substantial (20% to 30%), which proves that interpreting the lines is not an easy task, especially in doubtful cases, and that specific training is necessary for correct interpretation. Usually, these interpretation differences occur when test lines are faint or remains of the reagent are present on the test strips. In such cases, it is normal that there are doubts about the result obtained, and this situation occurs mostly when interpreting results at the cutoff level, in contrast to the results obtained at the negative and 3× levels, which are expected to be easier to classify. It should be noted that the agreement on the THC strip at the negative and 3× levels is very low and not as expected (it should be near 100%); this was due to the fact that the results obtained at those levels showed a high number of faint test lines that caused doubts while the operators interpreted the results.

Table 5. Agreement test results summary. OP, operator. TR, test reader.

Agreement OP1 vs. OP2 (Test/Retest)   AMP      KET      COC       MET       OPI       THC      Average
Total Agreement (%)                  93/100   93/100   100/100   89/95     100/80    87/100   94/96
Negative (%)                        100/100  100/100   100/100  100/100   100/100    83/100   97/100
Cutoff (%)                           80/-     90/-     100/-     67/-     100/-      97/-     89/-
3× (%)                              100/100   90/100   100/100  100/90    100/60     80/100   95/92

Agreement OP1 vs. TR (Test/Retest)    AMP      KET      COC       MET       OPI       THC      AVG
Total Agreement (%)                  87/100   92/100   100/95    91/100    98/95     94/98    93/98
Negative (%)                        100/100  100/100   100/100  100/100   100/100   100/100  100/100
Cutoff (%)                           60/-     93/-     100/-     73/-      93/-      85/-     84/-
3× (%)                              100/100   83/100   100/90   100/100   100/90     96/95    97/96

Agreement OP2 vs. TR (Test/Retest)    AMP      KET      COC       MET       OPI       THC      AVG
Total Agreement (%)                  87/100   92/100   100/95    89/95     98/75     69/98    89/94
Negative (%)                        100/100  100/100   100/100  100/100   100/100    71/100   95/100
Cutoff (%)                           60/-     97/-     100/-     67/-      93/-      60/-     79/-
3× (%)                              100/100   79/100   100/90   100/90    100/50     76/95    93/88

OP1 and OP2 vs. TR (Test/Retest)      AMP      KET      COC       MET       OPI       THC      AVG
Total Agreement (%)                  90/100   96/100   100/95    96/100    96/95     72/98    91/98
Negative (%)                        100/100  100/100   100/100  100/100    93/100    70/100   94/100
Cutoff (%)                           70/-    100/-     100/-     87/-      93/-      60/-     85/-
3× (%)                              100/100   86/100   100/90   100/100   100/90     86/95    95/96

There was a large number of results obtained in the rapid test in which the test reader (TR) judged the result as negative, while the operators (OPs) judged such results as positive (OPs positive–TR negative). This was caused by the fact that the test lines in some strips were very faint, which induced the operators to judge the results incorrectly as positive when these should have been labeled as negative, as determined by the test reader.

The cases when the operators judged the result as negative and the test reader interpreted it as positive can be explained by the development of faint test lines.

There are discrepancies in the agreement between operators (OP1 vs. OP2) when interpreting the results on the OPI strip, especially at the 3× detection level; this may be due to the appearance of stains or color remaining on the reagent strip.

The agreement of Operator 2 is substantially lower than that of Operator 1. It can be seen that Operator 2 had some doubts while interpreting the results for the OPI strip at the 3× level.

As can be seen, all errors correspond to the strips tested where positive results were expected (3×). At such levels, the test reader interpreted the result as positive, while the operators judged the result as negative. This discrepancy may probably be associated with the presence of color remaining on the strip, which might give the operators the impression of a test line, while such lines did not have the typical characteristics to be considered as test lines.

The cases in which both operators did not agree with the results obtained by the test reader can be classified into the categories shown in Table 6, together with the results as a percentage of the total number of strips evaluated.

Table 6. Disagreement and adjusted agreement test results summary.

Both Operators vs. Test Reader            AMP            KET           COC           MET           OPI           THC            Total
Disagreements OPs negative–TR positive,
count (%)/retest count (%)            0 (0)/0 (0)    0 (0)/0 (0)   0 (0)/2 (5)   0 (0)/0 (0)   0 (0)/2 (5)   3 (3)/1 (3)    3 (1)/5 (2)
Disagreements OPs positive–TR negative,
count (%)/retest count (%)            9 (10)/0 (0)   4 (4)/0 (0)   0 (0)/0 (0)   4 (4)/0 (0)   2 (2)/0 (0)   20 (22)/0 (0)  39 (7)/0 (0)
Disagreements TR error,
count (%)/retest count (%)            0 (0)/0 (0)    0 (0)/0 (0)   0 (0)/0 (0)   0 (0)/0 (0)   2 (2)/0 (0)   2 (2)/0 (0)    4 (1)/0 (0)
Total disagreements,
count/retest count                    9/0            4/0           0/2           4/0           4/2           25/1           46/5
Total adjusted agreement,
(%)/retest (%)                        100/100        100/100       100/95        100/100       98/95         94/98          99/98
Total test count/retest count         89/40          89/40         89/40         89/40         89/40         89/40          534/240

Excluding the cases in which the operators indicated positive when the test reader indicated negative and counting them as interpretation errors attributable to the operators, the total adjusted agreement (excluding cases when OPs positive and TR negative) has been computed and is shown in Table 6.

6.2. Precision Test Results

The results indicated in Table 7 were obtained once the tests were processed repeatedly using the test reader for each detection level. The cases in which the test reader interpreted the result as “negative” when the operator interpreted it as “positive” correspond to strips showing very faint test lines, especially at “cutoff Test 1” and “3× Test 1” on the KET and OPI strips.

Table 7. Precision test results.

Detection Level          No. Tests   No. Strips per Side   Total Strips 1   TR Same Result 2   Precision (%)
Negative test 1/2/3      40/40/40         4/2/2              156/78/78        156/78/78        100/100/100
Precision for negative                                          312              312               100
Cutoff test 1/2/3        40/40/40         4/2/2              160/80/80        149/78/71         93/98/89
Precision for cutoff                                            320              298                94
3× test 1/2/3            40/40/40         4/2/2              156/80/80        137/80/80         88/100/100
Precision for 3×                                                316              297                94
Total                      360                                  948              907                96

1 Total number of strips correctly interpreted; invalid strips or errors excluded. 2 Number of times that the TR gave the same result for a given strip; in the case of different results, the maximum value was taken.

7. Conclusions

An innovative, low-cost, portable approach for the rapid interpretation of lateral flow saliva test results in drug-of-abuse detection, based on the use of commonly-available smartphone devices, has been presented and evaluated. A small, inexpensive light box is used to control image quality parameters during the acquisition. This solution reuses an existing smartphone, and no additional equipment is needed besides the light box, which costs a fraction of the price of similar products on the market, whose prices range around 3000 EUR/device. In order to segment the strips, images are first pre-processed to correct for small displacements and/or rotations with an ORB feature-based matching and homography strategy, and the strips corresponding to the different substances to be detected are segmented using color features and morphological operations. Finally, a lateral histogram containing the saliva test lines’ intensity profile is extracted. Lateral histograms are then classified with a machine learning-based procedure, including unsupervised data normalization and classification using a multilayer perceptron artificial neural network (MLP-ANN). The implemented solution can be adapted to any line-based or color-based lateral flow test on the market. System agreement and precision tests were run for system evaluation, showing great agreement between the visual results obtained by human operators and those obtained by the test reader app, while showing high repeatability. The objective of the work is to demonstrate that the test reader is able to obtain the same result as (or better than) a trained operator, therefore reducing the subjectivity of the analysis by standardizing the test interpretation conditions (illumination, test-to-image-sensor distance, etc.) and by using a deterministic algorithm to obtain the results. In this sense, any operator, independent of his/her level of experience, can rely on the results obtained by the test reader. The system is automatic, allowing one to systematize the collection and analysis of the data in real time, removing the risks of manual results’ management and allowing for centralized processing. The mentioned features can be very useful in places where there is no qualified staff and rapid detection is needed, such as on-site detection performed by a police officer or a first-aid operator.


Author Contributions

All authors have contributed to the development of the work presented in this paper, and they are its sole intellectual authors, all collaborating tightly in several parts for the final success of the system. Carlos Sampedro has been responsible for the computer vision part. Jose Luis Sanchez-Lopez has been responsible for the machine learning algorithms. Adrian Carrio has been responsible for the software and system integration. Miguel Pimienta has staged the problem and final objectives of the system. He has developed the light box, also providing the samples, and has been responsible for the external evaluation process. Pascual Campoy has acted as the general supervisor of the technical development of the project.

Conflicts of Interest

The system has been developed under a private contract between the company Vincilab Healthcare S.L. (Madrid, Spain) and the Universidad Politécnica de Madrid (UPM). The resulting procedure and system are patent pending, and the exploitation of their results is shared by the two companies split from the original one, named Vincilab Healthcare S.L. and Aplitest Solutions S.L. (Madrid, Spain), both retaining all of the industrial rights.

References

1. Martinez, A.W.; Phillips, S.T.; Carrilho, E.; Thomas, S.W.; Sindi, H.; Whitesides, G.M. Simple telemedicine for developing regions: Camera phones and paper-based microfluidic devices for real-time, off-site diagnosis. Anal. Chem. 2008, 80, 3699–3707.

2. Tseng, D.; Mudanyali, O.; Oztoprak, C.; Isikman, S.O.; Sencan, I.; Yaglidere, O.; Ozcan, A. Lensfree microscopy on a cellphone. Lab Chip 2010, 10, 1787–1792.

3. Zhu, H.; Yaglidere, O.; Su, T.W.; Tseng, D.; Ozcan, A. Cost-effective and compact wide-field fluorescent imaging on a cell-phone. Lab Chip 2011, 11, 315–322.

4. Zhu, H.; Sikora, U.; Ozcan, A. Quantum dot enabled detection of Escherichia coli using a cell-phone. Analyst 2012, 137, 2541–2544.

5. Matthews, J.; Kulkarni, R.; Gerla, M.; Massey, T. Rapid dengue and outbreak detection with mobile systems and social networks. Mob. Netw. Appl. 2012, 17, 178–191.

6. Dell, N.L.; Venkatachalam, S.; Stevens, D.; Yager, P.; Borriello, G. Towards a point-of-care diagnostic system: Automated analysis of immunoassay test data on a cell phone. In Proceedings of the 5th ACM Workshop on Networked Systems for Developing Regions, New York, NY, USA, 28 June 2011; pp. 3–8.

7. Mudanyali, O.; Dimitrov, S.; Sikora, U.; Padmanabhan, S.; Navruz, I.; Ozcan, A. Integrated rapid-diagnostic-test reader platform on a cellphone. Lab Chip 2012, 12, 2678–2686.

8. Mudanyali, O.; Padmanabhan, S.; Dimitrov, S.; Navruz, I.; Sikora, U.; Ozcan, A. Smart rapid diagnostics test reader running on a cell-phone for real-time mapping of epidemics. In Proceedings of the Second ACM Workshop on Mobile Systems, Applications, and Services for HealthCare, Toronto, ON, Canada, 6–9 November 2012.

9. Shen, L.; Hagen, J.A.; Papautsky, I. Point-of-care colorimetric detection with a smartphone. Lab Chip 2012, 12, 4240–4243.

10. Oncescu, V.; O’Dell, D.; Erickson, D. Smartphone based health accessory for colorimetric detection of biomarkers in sweat and saliva. Lab Chip 2013, 13, 3232–3238.

11. De la Fuente, J.B.; Garcia, M.P.; Cueli, J.G.; Cifuentes, D. A new low-cost reader system for ELISA plates based on automated analysis of digital pictures. In Proceedings of the IEEE Instrumentation and Measurement Technology Conference, Sorrento, Italy, 24–27 April 2006; pp. 1792–1794.

12. Velikova, M.; Lucas, P.; Smeets, R.; van Scheltinga, J. Fully-automated interpretation of biochemical tests for decision support by smartphones. In Proceedings of the 25th International Symposium on Computer-Based Medical Systems, Roma, Italy, 20–22 June 2012; pp. 1–6.

13. DrugCheck VeriCheck. Available online: http://www.drugcheck.com/dc_vericheck.html (accessed on 22 November 2015).

14. Securetec DrugRead. Available online: http://www.securetec.net/en/products/drug-test/drugread-device.html (accessed on 22 November 2015).

15. DrugCheck SalivaScan. Available online: http://www.drugcheck.com/dc_salivascan.html (accessed on 22 November 2015).

16. Rublee, E.; Rabaud, V.; Konolige, K.; Bradski, G. ORB: An efficient alternative to SIFT or SURF. In Proceedings of the IEEE International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 2564–2571.

17. Rosten, E.; Drummond, T. Machine Learning for High-Speed Corner Detection. In Computer Vision–ECCV 2006; Springer: Berlin, Germany, 2006; pp. 430–443.

18. Rosten, E.; Porter, R.; Drummond, T. Faster and better: A machine learning approach to corner detection. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 105–119.

19. Calonder, M.; Lepetit, V.; Strecha, C.; Fua, P. BRIEF: Binary Robust Independent Elementary Features. In Computer Vision–ECCV 2010; Springer: Berlin, Germany, 2010; pp. 778–792.

20. Hamming, R.W. Error detecting and error correcting codes. Bell Syst. Tech. J. 1950, 29, 147–160.

21. Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395.

22. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66.

23. Cybenko, G. Approximation by superpositions of a sigmoidal function. Math. Control Signals Syst. 1989, 2, 303–314.

24. Synergent Biochem Inc. Homepage. Available online: http://www.synergentbiochem.com (accessed on 22 November 2015).

© 2015 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/4.0/).