  • waterstechnology.com June 2012

    trading technologies for financial-market professionals

Special Report Big Data

    Sponsored by:

  • KNOWLEDGE TO ACT

    thomsonreuters.com

READY TO OUTPERFORM?

HOW MUCH DO YOU EXPECT FROM MACHINE-READABLE NEWS? EXPECT MORE. WHETHER YOU ARE RUNNING BLACK BOX STRATEGIES THAT NEED SUB-MILLISECOND DATA OR MANAGING MEDIUM-TO-LONG-TERM INVESTMENTS, THOMSON REUTERS NEWS ANALYTICS ENABLES YOU TO OUTPERFORM THE COMPETITION. Be the first to react to market-moving economic or company events. Analyze thousands of news stories in real time to exploit market inefficiencies or manage event risk. Use statistical output from our leading-edge news analytics to power quant trading strategies across all frequencies and provide additional support to your decision makers.

    With unmatched depth, breadth and speed of news, razor-sharp news analytics and both hosted and on-site deployment options, we have everything you need to gain critical insight. And turn that insight into profit.

    THOMSON REUTERS NEWS ANALYTICS. DISCOVER. DIFFERENTIATE. DEPLOY.

    For more information: [email protected]

© Thomson Reuters 2012. All rights reserved. Thomson Reuters and the Kinesis logo are trademarks of Thomson Reuters. 48003923 001206.

Starting in the mid-1800s with the onset of the California Gold Rush and culminating in the 20th century with the rise and proliferation of large factories and ever-more sophisticated techniques, mining has always been a money-maker. In the second decade of the 21st century, mining is once again big business: data mining, that is.

For those organizations that can assimilate, interrogate, and derive meaning from large, unstructured data sets, a fortune awaits. The judicious application of Big Data tools and technologies can go a long way toward addressing rapidly changing regulatory requirements, while traders can tap into the full potential of social media and other sentiment data, and risk managers can monitor their firms' counterparty, asset class and country exposure on an intra-day basis.

In the financial services industry, data is king, and taming Big Data, therefore, holds the key to firms controlling large portions of their operating environments.

But the question remains: Will the capital markets be on the cutting edge of this fast-emerging revolution? When it comes to cloud computing, the adoption of mobile technology, the harnessing of social media data, and the implementation of field-programmable gate arrays (FPGAs) to super-charge compute-intensive processes, the capital markets have, by and large, lagged other industries in terms of adapting to change. Even in the realm of Big Data, the pharmaceutical industry and the military have been leading the charge.

But successfully addressing the Big Data challenge offers game-changing potential, which, if fully utilized, can bring about a competitive advantage. Recently, State Street chief scientist David Saul spoke to Waters about the exciting prospect of attacking Big Data using semantic database technology. He described the technology as "cool and exciting stuff" that has limitless potential in the financial services industry.

With various technologies readily available, now is not the time to sit on the sidelines and wait for the technology to mature and trickle down. Now is the time to be an early adopter: this is where research-and-development dollars should be going. This is the financial services industry's gold rush.


Victor Anderson, Editor-in-Chief

The Industry's Gold Rush

Editor-in-Chief Victor Anderson, [email protected], tel: +44 (0) 20 7316 9090
US Editor Anthony [email protected]
US Staff Writers James [email protected], [email protected]
Staff Writers Jake [email protected], Bourgaize [email protected]
Head of Editorial Operations Elina [email protected]

Contributors Max Bowie, Editor, Inside Market Data; Michael Shashoua, Editor, Inside Reference Data

Global Commercial Director Jo [email protected], tel: +44 (0) 20 7316 9474
US Commercial Manager Bene [email protected]
Business Development Manager Melissa [email protected]
Business Development Executive Mark [email protected]
Marketing Manager Claire [email protected]
Lisa [email protected]

Group Publishing Director Lee Hartt
Chief Executive Tim Weller
Managing Director John Barnes

Incisive Media Head Office, 32-34 Broadwick Street, London W1A 2HG, UK

Incisive Media US, 55 Broad Street, 22nd Floor, New York, NY 10004, tel: +1 646 736 1888

Incisive Media Asia, 20th Floor, Admiralty Center, Tower 2, 18 Harcourt Road, Admiralty, Hong Kong SAR, China, tel: +852 3411 4888, fax: +852 3411 4811

Subscription Sales: Hussein Shirwa, tel: +44 (0)20 7004 7477; Dominic Clifton, tel: +44 (0)20 7968 [email protected]

Incisive Media Customer Services, Haymarket House, 28-29 Haymarket, London SW1Y 4RX, tel (UK): +44 (0)870 240 8859, tel (International): +44 (0)1858 438421

    Waters (ISSN 1068-5863) is published monthly (12 times a year) by Incisive Media. Printed in the UK by Wyndeham Grange, Southwick, West Sussex.

© Incisive Media Investments Limited, 2012. All rights reserved. No part of this publication may be reproduced, stored in or introduced into any retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior written permission of the copyright owners.

    To receive Waters magazine every month you must subscribe to Buy-Side Technology online, Sell-Side Technology online or one of our multi-brand subscription options. For more information and subscription details, visit waterstechnology.com/subscribe


Launched last year, Derwent Capital Markets' Absolute Return fund fused capital markets and social media. It based its investment decisions on sentiment analysis from Twitter. After posting decent results for its first month of trading, the fund went quiet, eventually wrapping up its operations for a new direction.

"We couldn't have timed it worse to try and launch a new and innovative fund, with the US losing its AAA rating and equity markets in freefall," says Paul Hawtin, CEO and founder of Derwent Capital Markets, which has now rebranded as DCM Capital.

The fund held around $40 million in seed capital initially. "Hedge funds need a certain amount of capital, $100 million-plus, before they can reach critical mass," he says. "So, we made the decision to move out of the hedge fund industry, and open up our technology to the mass markets for an online trading platform with a research tool embedded within it."

DCM plans to launch the platform, aimed at retail investors, in late summer. The technology powering it has been developed in-house using the financial resources from Derwent's ill-fated venture, building on a core of the sentiment analysis tools initially built for the hedge fund.

"We've spent the last 18 months improving our technology. Initially, we were just focused on global sentiment, but now we're able to monitor it on any individual stock, currency or commodity," he says. DCM partnered with IG Group for the project. Client funds will be held by the latter, and IG will then push the prices and tradable instruments to the platform.

DCM says it has developed a nuanced approach to sentiment analysis: the ability of funds to mine the sheer volume of data projected by social media, and generate alpha from that.

"It's quite a complex thing, but to simplify it, we listen to the Twitter firehose of data," says Hawtin. "We do look at a few others as well, and we're looking at growing that, but predominantly it's Twitter."

While the platform will be largely web-based, DCM plans to use HTML5 to roll out iOS applications for Apple devices, with a move to other devices planned later. Other vendors have begun making their own inroads into the space. Thomson Reuters recently released its own sentiment indicators, while other major vendors are looking at including sentiment in their market data feeds as an additional layer rather than an executable quality.

"What we've found is that companies have fantastic tools, but you get information overload," Hawtin says. "It's great, but what does that mean, and how can you use it to trade? With this, we've spent a lot of time focusing on how to refine it for the end-user to understand and trade off it."

    Twitter Hedge Fund Eyes Rebirth as DCM Capital

    NYSE Technologies, the data and trading technology arm of NYSE Euronext, will unveil a new service by the end of this month, dubbed Market Data Analytics Lab (MDAL), which will provide access to a central, managed database of its historical trade and quote (TAQ) data as well as a range of hosted analytics and tools for querying the data, enabling clients to back-test and implement trading strategies more easily and without the cost of acquiring and managing the entire TAQ database in-house.

An extension of the exchange's Capital Markets Community cloud-based connectivity platform, MDAL allows customers to "churn a lot of data within our environment before bringing it into theirs," especially when performing calculations on large volumes of market data within specific time periods, says Todd Watkins, product manager for US cash and data products at NYSE Technologies. "And it benefits NYSE Technologies because we can deliver the dataset they need rather than the entire database. But we will continue delivering the data via FTP and summary files by email," Watkins explains.

"Clients no longer have to buy and store these different datasets on their own site. Instead, they can download only the subset they need," says Brian Fuller, business development manager for global market data at NYSE Technologies, who adds that the service also includes a library of pre-built, commonly used functions, ranging from simple equations to more sophisticated, moving average-type calculations, which can be created in a simplified version of XML using drop-down menus accessed via a web interface.
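To make the idea of such pre-built functions concrete, the sketch below shows the kind of rolling calculation a hosted function library might expose, expressed here in Python over a downloaded TAQ subset. The file name, column names and window length are assumptions for illustration only, not NYSE Technologies' actual schema or API.

```python
import pandas as pd

# Hypothetical illustration of a "pre-built function" over a TAQ extract:
# a simple moving average and a running VWAP for one symbol.
taq = pd.read_csv("taq_subset.csv", parse_dates=["timestamp"])

ibm = (taq[taq["symbol"] == "IBM"]
       .sort_values("timestamp")
       .set_index("timestamp"))

# 20-trade simple moving average of the trade price.
ibm["sma_20"] = ibm["price"].rolling(window=20).mean()

# Running volume-weighted average price over the day.
ibm["vwap"] = (ibm["price"] * ibm["size"]).cumsum() / ibm["size"].cumsum()

print(ibm[["price", "sma_20", "vwap"]].tail())
```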

MDAL provides historical TAQ data for NYSE Euronext's US equity markets, and the exchange is looking to expand the service to cover other asset classes traded on markets within its parent group, such as derivatives and bonds, as well as data from other exchanges in the Northeast US, and other away markets, based on client demand, Fuller says.

    Users will be able to upload their own datasets into MDAL in spreadsheets or CSV files, and link them to the exchanges datasets to filter data according to their list of securities.

Currently, users can buy TAQ content online as an FTP download, but cannot perform the calculations themselves in a managed fashion. With MDAL, not only will clients be able to perform calculations online using hosted datasets, and hence not have to manage the capture and storage of that data onsite, but they will also be able to download the resulting calculations and underlying data in a variety of file formats.

"We think we will see a wide range of users, starting with smaller buy-side and quantitative shops, or mid- and back-office staff in larger firms who don't need access to the entire dataset," Fuller says.

    Pricing for MDAL will be a monthly per-user fee, the cost of which will vary according to the number of concurrent accesses, Watkins says.

    NYSE Technologies Bows Hosted TAQ Analytics Lab

Paul Hawtin, DCM Capital


    The Depository Trust & Clearing Corp. (DTCC) is expanding its India business center in Chennai into a technology infrastructure support and development office to help bolster its round-the-clock transaction-processing, funds delivery, and data storage businesses.

Over the next two years, the DTCC plans to expand its full-time staff in Chennai. In anticipation of this, the office has relocated to a larger site in Chennai where it can potentially acquire additional space as it expands.

The DTCC began working with technology vendors in Chennai in 2004 and has staffed an IT center and vendor oversight function there since 2008.

"Creating a stronger base in India helps us strengthen our presence and IT resources to support regional European and Asia-Pacific business initiatives. The geographic dispersal of our staff also allows us to sustain our follow-the-sun workflow model for managing our IT," explains Robert Garrison, DTCC managing director and CIO.

The decision to expand in Chennai also reflects the DTCC's push to sustain its operations and global data management businesses; provide technology research and development resources; ensure 24-hour business continuity and risk-mitigation support for the DTCC and other securities industry infrastructure organizations that contract with the DTCC for business continuity backup; and manage and support a broader range of IT vendors.

    DTCC Expands Chennai Office to Bolster Operations

Financial firms are increasing their focus on trying to derive value from analysis of unstructured data, from news to social media sources, though challenges to adoption remain around the trustworthiness and timeliness of non-traditional data sources.

The first challenge lies in determining how much structure a dataset contains, and whether it can be analyzed by existing tools used for other datasets. "There are at least three or four traditional areas of data that we're all used to dealing with: structured data, such as ticks and quotes, and semi-structured data, such as news feeds, because they have some structure applied in terms of a headline and fields that one can filter," says Mark Fischer, vice president of product management at CQG. "Then there's completely unstructured information, like Twitter or blogs, which have no structure associated with them, but we are finding ways to mine that information," Fischer says.

"Structured data is just like market data. For example, non-farm payrolls: we already have that in a structured format before it leaves the lockup," says Rich Brown, head of quantitative and event-driven solutions at Thomson Reuters. "Unstructured data is where the opportunities are. With structured data, the opportunity is over in a thousandth of a second. But unstructured data applies for much longer time horizons, and offers the largest opportunities for people to differentiate their strategies."

"Sentiment analysis has been around for a long time, but it is slow and not what people use in terms of high-frequency trading. So what people are trying to do now is figure out if they can get that any closer to real time," Fischer says. "I'm a skeptic about performing low- or semi-low-latency sentiment analysis for high-frequency trading. These tools won't be instantaneous reactions to the marketplace, but will be longer-term and more thoughtful."
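As a rough illustration of what the simplest form of such sentiment scoring involves, the sketch below assigns a crude score to individual tweets by counting positive and negative keywords. It is a minimal sketch only; the word lists and sample tweets are invented, and production systems rely on far richer linguistic and statistical models than this.

```python
# Minimal keyword-based sentiment scoring of short messages (illustrative only).
POSITIVE = {"beat", "upgrade", "bullish", "strong", "record"}
NEGATIVE = {"miss", "downgrade", "bearish", "weak", "default"}

def score(text: str) -> float:
    """Return a crude sentiment score in [-1, 1] from keyword counts."""
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

tweets = [
    "acme posts record quarter, analysts upgrade to buy",
    "acme guidance looks weak and talk of a downgrade grows",
]
for t in tweets:
    print(f"{score(t):+.2f}  {t}")
```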

Alexander Abramov, director and corporate relations committee chair for the Information Systems Audit and Control Association, says there's nothing new about the social media rumor mill and how it affects the decision-making process, adding that he hopes the industry will be the beneficiary of new technologies to enable firms to derive greater insight from analyzing unstructured data.

Steve Ellenberg of MDSL points to a key problem with basing decisions on data from social sources, especially those on which algorithms base trading decisions at sub-millisecond speeds: "News feeds are structured and have a certain authority. But there's a very low entry point to some forms of unstructured data and social media," he says.

These challenges can make it hard for developers to fine-tune their systems to get the most out of the morsels of legitimate value hidden in the universe of social media. "In terms of social media, there are a lot of engineering problems: The signal-to-noise ratio is very low, and from an engineering perspective, say, with Twitter, we are working with snippets of text, so there is a lot of research to be done to overcome these limitations," says Ron Papka, global head of client analytics and market data distribution at Citi.

However, he says that companies are increasingly using these channels to quickly disseminate important news or warnings to a mass audience. "Over time, more companies are releasing information over Twitter rather than by traditional means. For example, when Total had a gas leak on its North Sea platform recently, they didn't issue a press release; they tweeted it to get the information out there. So over time, this will become structured information for use in trading and risk management," Papka says.

Still, pending the development of more sophisticated analysis tools, much of the universe of unstructured data, from news to tweets, will be used for risk management, to halt trading strategies in the event of unexpected news. "If you look at the high-frequency space, there are uses, but it is used more to stop trading," Brown says, though he adds that new tools that can use this information more proactively may not be far away. "With news, you can get signals of the volume of news and can use that to build adaptive algorithms that react to news, rather than just stopping a strategy."

    Traders Seek Profits From Unstructured Data

Steve Ellenberg, MDSL


IBM is working with investment banks to identify potential uses for its Watson supercomputer, which appeared as a contestant on US game show Jeopardy!, as a data and sentiment analytics engine.

    The vendor signed a deal to explore potential uses for Watson at Citi around its retail operations, and is now in discussions with a number of banks around using Watson to support wholesale and investment banking, says Likhit Wagle, global industry leader for banking and financial markets at IBM Global Business Services.

"Banks are facing exponential growth in the volumes of data they need to process and draw information from; internal data that is not necessarily accurate; and a lot of this data is not structured," Wagle says, adding that Watson can address these issues through its ability to process vast volumes of data.

"It's an adaptable system that learns through doing, so the more you give it, the better output you get, and not just for structured data: Watson can also draw insights from unstructured data, such as news items and blogs, to give analysts more views of data and sentiment, to enhance the quality of their recommendations," he adds.

Banks could use Watson to obtain a better view of risk associated with specific clients, or to analyze large volumes of data to identify drivers of systemic risk. Another potential use, especially in emerging markets where data is not readily available, is around identifying suitable clients for firms' wealth management and private banking services.

Implementations of Watson will be on a bespoke basis. "We see Watson working alongside humans to enhance the quality of the advice being provided, and it depends on parameters set by human beings," he says.

However, he says firms could seek to use Watson as an additional input to the development and execution of sentiment-based trading strategies, and IBM would look at building a solution that automates some activities, if clients demand it.

    IBM: Using Watson for Analysis Is Elementary

The International Securities Exchange (ISE) recently released its managed ISE Premium Hosted Database (ISE PhD) of options and equities data and options analytics, developed in partnership with options analytics provider Hanweck Associates, to support traders' back-testing and analysis requirements.

The ISE began piloting the service, which has been in development for almost two years, with hedge fund, market-maker and bank clients late last year, and is now making it publicly available. ISE PhD provides options tick data from the Options Price Reporting Authority, underlying Level-1 US equities data, and tick-by-tick implied volatilities and greeks calculated by Hanweck's Volera hardware-based options analytics engine, all dating back to 2005.

The service includes ISE's proprietary open/close prices, which are already available as a separate historical product, and are primarily used by quantitative trading firms and proprietary trading groups to create analytical models and test trading strategies. "PhD is certainly a quant offering, so we included that dataset because many quants who already use that data now will also want to use PhD," says Jeff Soule, head of market data at ISE.

In addition, ISE is providing a database of corporate actions as part of PhD, which users can apply to a query, depending on their needs, for example, to see when a Bear Stearns option became a JPMorgan option, to determine the pricing and implied volatility for the option at that specific point in time.

    The exchange is also talking to other, unnamed data providers and exchanges about including their content in the database, such as futures data, which Soule says he expects to add to PhD, and other content, to be driven by customer demand.

    The database supports back-testing, as well as pre- and post-trade analysis and transaction-cost analysis. ISE is also seeing interest from software vendors looking to incorporate the historical data to enhance their existing desktop applications, to support capabilities such as charting, trade idea generation and requests for time and sales data, Soule adds.

At launch, the database will be updated daily after market close, though ISE plans to add real-time data integration by year-end. "Once we add the real-time data, that will expand the prospect base significantly. For example, there are customers that will want to query the database intra-day for trading ideas," Soule says.

Client systems can access PhD over the internet or by cross-connecting to ISE's servers within the exchange's primary datacenter at Equinix's NY4 facility in Secaucus, while traders can use pre-defined queries in PhD's web interface to quickly access the data they need, for example, by simply entering the date range and symbols they want for back-testing, or can use APIs to write their own queries for retrieving data.

Soule says the managed database eliminates a key challenge for firms that may have considered building an infrastructure to store and manage this data themselves: keeping up with the storage capacity and performance requirements of a growing dataset. "We're talking about 200 terabytes of data, and there's a big cost factor for somebody to build this infrastructure out, not just an upfront cost but a significant ongoing cost," he says.

ISE is offering a flexible pricing model for the database, to accommodate what it hopes will be a broad range of users. Clients can sign up for annual subscriptions to one-year chunks of content, allowing them to query or download data for the past 12, 24 or 36 months, dating back to 2005. Alternatively, users can pay for one-off queries, for example, to run analysis on six months' worth of data on a specific option that they are thinking of trading.

    ISE, Hanweck Unveil Hosted Tick Database

Likhit Wagle, IBM

With growing volumes, velocity and variety of data, it is no longer enough for financial services firms to limit their analysis to traditional market data. To unlock the real benefits of Big Data, one needs to analyze broader sources, such as unstructured data, and combine that information with existing signals to differentiate and enhance trading, investment, and risk models. By Richard Brown

Big Data has been a big IT story for many years now, but it is only recently that the concept has caught the attention of the financial services industry. While market data volumes have skyrocketed in recent years, some might say the data the financial services industry currently looks at is just the tip of the iceberg. The more complex and interesting aspect of Big Data in financial services lies in its variety, however.

While some businesses deal with more isolated data types that do not necessarily span multiple disciplines, there is a significant range in the variety of data that can have an impact on a firm's risk measurements as well as its trading and investment performance. Unstructured data that may impact the market includes broker research, industry or economic reports, premium and internet news feeds, blogs, tweets, and audio and video programming.

When analyzing this vast array of content, one needs to do it in a consistent manner and note key aspects including the source and motivations of the data: who wrote it, who published it, for what audience and for what purpose; what it is about, and to what extent (the people, companies, places, and so forth); the relevance of the data; the tone in which it is being talked about; how unique, repetitive or popular the story is and any acceleration of trends; the psychological aspects being conveyed; the contextual backdrop; and the potential implications for certain trading and investment decisions, to name just a few. Doing this on hundreds, thousands, or even millions of sources can easily overwhelm most systems and cause analysts to quickly become lost in the tsunami of data.

Thomson Reuters News Analytics enables users to understand these key attributes among a wide variety and massive quantity of this unstructured content. It analyzes the data in a consistent, intelligible way to help users quickly unlock the potential in big unstructured data. Whether it be for systematic investment and trading strategies or to deepen a human's comprehension of data, Thomson Reuters News Analytics transforms this qualitative data into structured, quantitative forms to support a variety of analytic use-cases.
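As a rough illustration of what "structured, quantitative forms" can look like, the sketch below models a single scored news item with the kinds of attributes discussed above (entities, relevance, tone, novelty). The field names and values are invented for the example and are not the actual Thomson Reuters News Analytics schema.

```python
# Illustrative record for a news item after analytics has converted
# qualitative text into structured, quantitative fields.
from dataclasses import dataclass, field

@dataclass
class ScoredNewsItem:
    story_id: str
    timestamp: str                                  # ISO-8601 publication time
    entities: list = field(default_factory=list)    # companies/people the story is about
    relevance: float = 0.0                          # 0..1: how central the entity is to the story
    sentiment: float = 0.0                          # -1..1: tone of the story
    novelty: float = 0.0                            # 0..1: 1 = first report, 0 = pure repetition

item = ScoredNewsItem(
    story_id="nX123456",
    timestamp="2012-06-01T13:05:00Z",
    entities=["ACME Corp"],
    relevance=0.92,
    sentiment=-0.40,
    novelty=0.85,
)
print(item)
```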

Analyzing the Analysis
One of the main goals of this process is to understand the implications the information has for various business processes. When the data has been transformed into a digestible format, it is ready for a broader, or more common, analytics process. To do that, it is necessary to understand the holistic information value chain. Combining the unstructured data analysis with more traditional sources such as pricing and reference data, parent/subsidiary information, supply-chain dynamics, people/titles/roles and products/brand databases, and doing so with an accurate point-in-time perspective, is not easy, but it is required in order to support the downstream uses.

The outcome of the analytics process will likely vary depending on who is ultimately consuming the information, but one of the important things to consider is that, for the most part, the more attributes one has on the data, the more extensible it becomes.

    Thomson Reuters can provide the content, technology, and data management capabilities to properly analyze this wealth of unstructured data, enabling financial services firms to focus on the implications to their investment and trading strategies. Together, we can unlock the value of Big Data.

    Richard Brown is global head of quant and event-driven trading at Thomson Reuters. Visit www.thomsonreuters.com for more information.

    Unlocking the Value in Big Data

    Richard Brown

Sponsor's Statement



Regulatory and competitive pressures, liquidity fragmentation, and increasingly sophisticated trading strategies have led to ballooning data volumes that traditional technologies are no longer equipped to handle. Known as Big Data, these massive data sets must be mined and analyzed to allow capital markets firms to stay abreast of their competitors. Other industries have tackled Big Data, but financial services firms have been relatively late to the game, and are looking at new technologies to address the challenges.

Big Challenges

Q How do you define Big Data? Is this a new phenomenon, or simply the next phase of enterprise data management with a catchy new name? Louis Lovas, director of solutions, OneMarketData: Big Data can be defined by two salient points. First, there is supporting hardware. Bigger, faster, parallel hardware architectures have not only enabled greater compute power but also massive growth in storage capacities. This classic Moore's Law model has created maximal efficiencies in storage per dollar. Yet hardware has long been subject to commoditization. Practically speaking, it is a necessity, but such entropy creates a trajectory that makes hardware's relevance in the Big Data equation equal to that of electricity.

    The advancements in this foundational compute power have paved the way for the true advantage, deriving business benefit through focused Big Data solutions. The ability to tell a story with the data is what elevates a Big Data solution over the underlying commodity hardware and storage architectures. The story is germane to an industry such as finance and creates relevance and monetizes the data for a business.



Peter Chirlian, CEO, Armanta Inc: With competition, new regulations and shortened product lifecycles, managers are forced to run a data-driven business instead of simply relying on instinct. Big Data represents the convergence of trends in software and hardware, along with billions in venture capital, which has led to the emergence of new platforms for data management and analysis. It's given businesses the ability to deploy many platforms, each suitable for a class of business questions. Big Data offers the promise of finally enabling a truly data- and analytics-driven enterprise. In such an enterprise, analytics isn't just a point solution. It is an end-to-end process involving everything from data gathering and cleansing to operationalizing business processes, across the entire spectrum of new Big Data tools and existing data infrastructure.

    Dennis Smith, managing director, BNY Mellon: Big Data is data that has any of the following characteristics: extreme volume, wide variety of data formats, high velocity of record creation, along with variable latencies and the complexity of individual data types within formats. Note that it is about more than just volume. There is a bit of an evolution. Existing technologies have allowed us to perform analysis of historical data. Big Data has the potential to not only provide us better insight into the current situation, but also positions us to be more predictive.

    Andrew Poulter, head of risk analytics and methodology technology, RBS: I think there is certainly a cultural shift in terms of how people think about data, the importance of data retention, and how this can be fed back to improve business processes and ultimately margins. Technically, I see Big Data as an evolution as opposed to a new phenomenon or revolution.

Marcus Kwan, vice president of product strategy and design, CQG: Big Data is the issue surrounding the massive increases in the number of data sources, volume of the data, and the speed and granularity of the data, compounded over history. It has become more relevant in the past few years because of the number of exchanges going electronic, data collection methods, and the rate of technological advances. Big Data has become an issue for financial services due to pressures, both regulatory and competitive, and the need to identify opportunities for profit. Traders used to make trading decisions plotting charts with pen and paper. The technology and complexity are now light years from that time.

Ilya Gorelik, founder and CEO, Deltix: In the world of quantitative research and trading, we define Big Data by size (in terabytes), irregularity, and rate of new data arrival. It is one thing to deal with vast quantities of data; it is quite another to deal with data arriving at rates measured in millions of messages per second, especially when the data is distributed irregularly over time. Market data volumes have massively increased since the fragmentation of trading venues post-Regulation NMS and the Markets in Financial Instruments Directive (Mifid), and the increasing adoption of technology has allowed trading firms to increase the number of orders being sent to trading venues, so we regard 2007 as the start of Big Data.

Rich Brown, global head, quantitative and event-driven solutions, Thomson Reuters: The volume, velocity and variety of data that characterize Big Data are unprecedented, and while the popularity of Big Data as the industry's latest catchphrase continues to reach new heights each day, its implications cannot be ignored. Traditional enterprise data management challenges are dwarfed by the scale and scope of problems, particularly surrounding the variety of data. Single asset-class pricing data and cross-asset depth-of-book are nothing compared to the challenges in analyzing unstructured data such as news, social media, audio and video.

Q What are the specific business applications for Big Data across the buy side and sell side? Which business processes are most affected by the continued growth of data volumes, in addition to its complexity and variety of sources? Kwan: We look at the market data realm of Big Data in the framework of two pillars: collection and distribution, and analysis and execution. The business first has to be clear on what its strategy is and then choose solutions for these two pillars that fit. If you choose collection and distribution systems before, or misaligned to, strategy, then it's simply an expensive science experiment. For example, within collection and distribution, firms must decide whether to go for direct connections to exchanges or source data from an aggregator. The deciding issues are around how fast you need the data versus the cost of maintaining a direct connection, infrastructure to collect and house the data, and so forth.

Peter Chirlian, Chief Executive Officer, Armanta, Inc. Tel: +1 973 326 9600. Web: [email protected]



    In the pillar of analytics and execution, we see a more important shift. Firms need teams who not only can understand the nuances of the data, but can formulate the right big-picture questions. Though these people may be rooted in mathematics and quantitative analysis, the outputs need to be a system that provides decision makers, who may not be as technically versed, the ability to participate effectively. Advanced visualization tools need to be able to mash up the multiple sources and the complex analysis, and sum it up in such a way that a business person can grow it and make an intelligent, well-informed decision.

Gorelik: We see three main applications. On the buy side, research into alpha-generation is key. This involves access to granular (Level-1 or market-depth) data, and the means to do quantitative research on this data. The second application is in modeling execution quality. We are often asked why an alpha model with an apparently high Sharpe ratio in back-testing does not perform well in live trading. There are, of course, several reasons why this might be the case. One is order execution. The smaller the profit potential in each trade, the more susceptible the model is to execution costs, especially slippage. The effective modeling of, and subsequent improvement in, execution costs is achieved by simulation using market-depth data. Thirdly, there are some firms that are using Big Data sets for doing original alpha-generation research. Twitter inevitably appears in such discussions, but more prosaically, quantitative researchers are doing serious research combining market data with machine-readable news, stock-loan and broader economic data.
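To make the execution-cost point concrete, the sketch below shows, under simplified assumptions, how a strategy's annualized Sharpe ratio computed from gross back-test returns can shrink once a fixed per-trade slippage charge is subtracted. The return series, trade frequency and slippage figures are synthetic, and this is not Deltix's methodology.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily strategy returns: a small positive edge with realistic volatility.
gross = rng.normal(loc=0.0005, scale=0.01, size=252)

def sharpe(daily_returns: np.ndarray) -> float:
    """Annualized Sharpe ratio of a series of daily returns (risk-free rate ignored)."""
    return np.sqrt(252) * daily_returns.mean() / daily_returns.std()

# Assume two round trips per day and one basis point of slippage per round trip.
slippage_per_day = 2 * 0.0001
net = gross - slippage_per_day

print(f"Gross Sharpe: {sharpe(gross):.2f}")
print(f"Net Sharpe after slippage: {sharpe(net):.2f}")
```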

Chirlian: Risk management, a critical Big Data application, has historically been limited by technology and use-cases. Before the financial crisis, static risk reports based on independent silos of data were deemed sufficient. This is now not the case. In the past few years, the volume of data and the complexity of the calculations surrounding risk have grown significantly. Existing systems can no longer provide the needed results, both for regulatory and business management purposes. The demand for dynamic, real-time risk measurement often outpaces existing technology capabilities; tasks like liquidity management require complex analysis across vast numbers of existing systems. There is now a sea change both in the way financial institutions look at risk and the technology platforms available to enable this change.

    Poulter: The specific business application for which RBS is using Big Data is to support internal model method (IMM) default risk capital calculations. The calculations require thousands of Monte Carlo paths of market data, representing the future evolution of market data. Apache Hadoop is used to hold the evolved market data and low-level results of interest to the business.
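The sketch below illustrates, in very simplified form, what "thousands of Monte Carlo paths of market data" can mean in practice: simulate paths for a single risk factor and summarize the resulting exposure per horizon, the kind of low-level result set that might then be held in a distributed store for drill-down. The model, parameters and exposure measure are assumptions for illustration, not RBS's IMM methodology.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate geometric Brownian motion paths for one market factor.
n_paths, n_steps = 10_000, 250          # one year of daily steps
dt, vol, drift, s0 = 1 / 250, 0.20, 0.02, 100.0

shocks = rng.normal((drift - 0.5 * vol**2) * dt,
                    vol * np.sqrt(dt),
                    size=(n_paths, n_steps))
paths = s0 * np.exp(np.cumsum(shocks, axis=1))

# Expected positive exposure of a hypothetical long position entered at s0:
# average over paths of max(S_t - s0, 0) at each future date.
epe = np.maximum(paths - s0, 0.0).mean(axis=0)

print("Expected positive exposure at ~1M, ~6M, 1Y horizons:",
      round(epe[20], 2), round(epe[125], 2), round(epe[-1], 2))
```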

Smith: Three come to mind. The first two are pretty common, while the third will probably become more common: performing batch operations on a massive amount of data, often as a front-end to existing tools, such as data warehouse appliances; analyzing large amounts of varied data to predict tendencies or future outcomes; and processing rapidly changing data such as that now associated with complex event processing (CEP) systems.

Brown: Firms are challenged from the front office through the back office and IT departments with issues ranging from database management, hardware and software upgrades, and network management, to data sourcing, permissioning and reporting requirements. Firms are turning to Thomson Reuters for our managed services offerings, looking to offload basic non-proprietary functions so they can focus on the higher value-added activities like better managing risk or finding alpha in this vast array of Big Data.

Lovas: Andy Palmer, co-founder of Vertica, once wrote: "Big Data is useless unless you architect your systems to support the questions that end-users are going to ask."

Businesses are not aiming for a do-it-yourself Big Data solution, nor do they want to be pioneers with a vendor. Competitive pressures demand fit-for-purpose solutions. Quant researchers look to combine differing data sets to unleash new discovery faster. Vendors that can deliver an analytical and data management platform fit for purpose for risk management, price discovery, and fraud management will hit the mark.

Q Why has the financial services industry seen such significant growth in data volumes, and how has this growth impacted firms' ability to efficiently manage large data volumes? Gorelik: Reg NMS in the US and Mifid in Europe resulted in fragmentation of trading, giving rise to more sources of market data.

Marcus Kwan, Vice President, Product Strategy and Design, CQG. Tel: +1 720 904 2933. Email: [email protected]. Web: www.cqg.com, news.cqg.com



Cheaper hardware and software platforms have made high-frequency trading normal practice for many trading firms, which increases the volume of market data. There are very few tools commercially available that are able to use these large data sets for meaningful analysis. Some firms have been struggling simply to store the data, let alone create value from it through analytical research.

Brown: The explosion of market data volumes and venues, the increase in the number and types of traded instruments, and the interconnectedness of global markets are dramatically increasing the complexity and cost of capturing, normalizing, processing, storing and adjusting these vast volumes of disparate data. Legacy systems and networks are no longer adequate. Single databases are not easily able to handle the various types of data or scale large enough or fast enough to enable users to react quickly to this information. Financial services firms are struggling to keep up with the changes, especially in this economic environment where it's no longer easy to just throw money at the problem (buy more hardware, hire more people) in order to make the problem go away.

Smith: In many ways, the data has always been there, but we could not cost-effectively do anything with it. Additionally, with the need to become more competitive, organizations realize that there could be benefits to including more and different data types into the mix.

Kwan: The growth of the data has been exponential. Firms used to trade across a small, finite set of instruments. Even with the most robust set of analytics applied against them, no problem. There has been rapid expansion of the electronic markets, multiplied by the speed and granularity of the data per instrument, now in microsecond ticks. Factor that with the wealth of internal performance and risk metrics that firms are collecting, and then with advanced analytics across all of that data. Though computing speeds continue to increase, this complexity can bring any system to its knees.

    And it comes back to being able to articulate company strategy clearly. Firms can easily get overwhelmed by the tide of Big Data, but keeping the strategy clearly in the forefront will enable firms to effectively wrestle with the challenge.

Poulter: I think financial services has always had the ability to generate far more data than was possible and realistic to store, for example, price histories, transaction-level risk data, and so on. Big Data has made it possible to store more of what is currently generated, to enable more detailed drill-down and trend analysis over time.

Lovas: Looking at US listed options, the Options Price Reporting Authority's daily peak reached 14 million messages in 2011, an increase of 131 percent over the previous year. This resulted in total message volume growing 78 percent. The scale of the options market is quintessentially Big Data. A number of factors have contributed to this exponential growth. Venues such as the Chicago Board Options Exchange's C2 Options Exchange and new products including Weekly Options and Volatility Index-based products have increased trading volumes in strikes and underliers. This proliferation has put options on the forefront as a strategic investment tool. The result has been an explosive growth in message traffic. The information flow is a flood, a tsunami, of market data. On a human scale, you cannot consume or make sense of what's inside that tidal wave without fit-for-purpose Big Data solutions.

Chirlian: The financial services industry has always been an information business. So it makes sense that firms with the most information and the best and fastest analytics are at a significant advantage. This competitive factor has consistently driven financial services firms to gather as much data as they can access. But it has also strained even the largest datacenters. Firms continue to look for better and more cost-effective solutions for dealing with growth; a data solution alone, however, is not enough. They also must apply sophisticated analytic capabilities across this vastly expanded datascape, which has put additional stress on their infrastructure.

Q What are the technology and operational challenges that need to be considered when dealing with Big Data? What technologies are available to firms looking to address this Big Data challenge? Poulter: Data recovery and regeneration options need to be fully considered, with any business-impacting outage understood. Challenges exist in training staff, across the development and support teams, and ensuring the correct infrastructural support is available. Specialist consultancies are being used for training, support and consultancy around the implementation itself. Due to the technology being relatively nascent, there are few experts across the whole community.


Ilya Gorelik, Founder and CEO, Deltix Inc. Tel: +1 617 273 2540. Email: [email protected]. Web: www.deltixlab.com


Lovas: Big Data is messy. Market data comes in many shapes, sizes and encodings. It continually changes and requires corrections and an occasional tweak. Discovering new alpha and optimizing existing strategies demands confidence in the resulting derived analytics. Big Data solutions must manage the vagaries of data sources and complex order-book structures, map ticker symbols across a global universe of exchanges and geographies, and accurately reflect pricing through cancelations, corrections, corporate actions and symbol changes. These are challenging financial-data management obstacles beyond the scope of ordinary storage architectures or file systems. Content-aware solutions leveraging the best of high-performance, scalable compute power are uniquely tuned to fulfill the demanding needs of quantitative analysts and algo traders.

Brown: In financial markets today, Big Data offers many types of content, both structured and unstructured, that need to be collected, analyzed and stored. Management of these types of data has been a challenge for capital markets firms in general and includes issues like tighter budgets, a skills shortage (both with new technologies and with new data analysis techniques), legacy systems' inability to scale, an increased number of competitors who may be more nimble, and the need to keep up with regulatory requirements. The solutions to some of these problems have been known for a while and can be summed up as follows:

• Shared-nothing, highly distributed database architectures.
• Consistency is very hard to fulfill in large datasets. Relax: most problems can be solved with eventual consistency.
• Don't insist on normalization; hierarchical data sets, for example, don't normalize well.
• Functional programming frameworks are better at solving most parallel distributed problems.
• Data outages can be handled by maintaining enough replicas.

Technologies such as Cassandra, Hadoop, and MapReduce give firms the ability to massively parallel-process data using functional programming constructs, to store huge datasets in both distributed memory and direct-attached storage, and to use a declarative interface that is not limited by SQL's reliance on relational algebra.
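As a toy illustration of the functional, MapReduce-style pattern mentioned above, the sketch below maps a stream of trade messages to (symbol, size) pairs and reduces them to per-symbol volume in a single process; a real Hadoop or MapReduce job distributes the same map and reduce steps across many machines. The message records are invented for the example.

```python
from collections import defaultdict
from functools import reduce

# Toy input: a few trade messages (in practice, billions of records on a cluster).
messages = [
    {"symbol": "IBM",  "price": 195.20, "size": 300},
    {"symbol": "MSFT", "price": 29.10,  "size": 500},
    {"symbol": "IBM",  "price": 195.35, "size": 200},
]

# Map step: each message becomes a (key, value) pair.
mapped = map(lambda m: (m["symbol"], m["size"]), messages)

# Reduce step: aggregate values by key.
def add_to_totals(totals, pair):
    symbol, size = pair
    totals[symbol] += size
    return totals

volume_by_symbol = reduce(add_to_totals, mapped, defaultdict(int))
print(dict(volume_by_symbol))   # {'IBM': 500, 'MSFT': 500}
```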

Kwan: The technology is evolving rapidly. While the days of proprietary formats are giving way to application programming interfaces (APIs) and more flexible formats, many firms aren't prepared to make the switch. Legacy systems are so entrenched in processes that even the thought of replacement is too painful. The strategy, benefits and return on investment have to be clear before commitment.

Chirlian: Big Data allows enterprises to deploy a variety of platforms, each targeting a certain analytic challenge. Businesses therefore must think through the kinds of analytic applications they want to build and then tailor their technology. Typically, Big Data platforms are incremental to existing data silos in the form of relational databases and file systems already in the enterprise. A critical goal is to provide analysts and business users with access to all of this data, across silos, for on-demand analytics. From an IT perspective, the question is how to build an integrated architecture that allows the business to view all the data in the enterprise and beyond, retrieve the data as needed, and analyze it at high performance and across any scale. This may be achieved by bringing together multiple independent solutions for each layer of the architecture and integrating them. Alternatively, financial services institutions could use an integrated platform such as Armanta, which is purpose-built to enable this end-to-end analytic process for business applications.

Gorelik: Firstly, recording market data is akin to drinking from a firehose, so this is the first challenge. There are only a handful of vendor products on the market that can do this. Secondly, once you store this data, you do not want to be physically moving it far because of the sheer size. Thus, you need to be able to use it in situ. Today, that typically means leaving it in or near the datacenter where it was first collected or where there are ticker plants located. Thirdly, in terms of processing, a metric more relevant than size is the number of data points, or messages. Because market data is measured in hundreds of thousands or millions of messages per second, any processing needs to be able to process at a similar rate. Finally, there are challenges related to the normalization, cleansing and filtering of data, which often require multiple sets of complex analytical transformations. All these challenges dictate solutions involving a built-for-purpose time-series data warehouse, event-processing and mathematical libraries, all capable of processing data at hundreds of thousands of messages per second.

Smith: There are many challenges; one is that these technologies are not out of the box, and technical skills in these areas are not plentiful. It also changes some of our current thinking in data management, security, and compliance. There are numerous associated technologies. I recently spoke to a group about the various Hadoop projects and sub-projects. I identified at least 20.


Louis Lovas, Director of Solutions, OneMarketData. Tel: +1 201 710 5977. Email: [email protected]. Web: www.onetick.com



In just the areas of modeling/development and storage/data management, there are three technologies associated with each: MapReduce, Pig, and Mahout with modeling/development; and Hadoop Distributed File System, HBase, and Cassandra with storage/data management.

Q Are most firms approaching Big Data management through a rip-and-replace strategy, or are they layering it on top of their existing infrastructure? Chirlian: An interesting thing about Big Data technology is that it extends enterprise infrastructure rather than replacing it, by adding new fit-for-purpose data management and processing tools. The challenge today is that infrastructure management has become increasingly complicated, both from an IT point of view and for business users who must learn new tools. Enterprises must develop a way to package this collection of tools and deliver the benefit of the new technologies, enabling users to perform integrated, end-to-end analytics.

    Smith: Big Data technologies are complementary to our legacy products. Most firms are vetting use-cases and incorporating this key tool set into the overall solution set.

Kwan: This is the perpetual enterprise question, and it depends on which part of the Big Data you're talking about. We've seen a new generation of tools that do a better job in both realms. With the sophistication of APIs and aggregation tools, we see that the layering strategy can work for many situations. Rip-and-replace has its place when maintenance costs can be saved.

    Gorelik: The underlying ability to process vast quantities of market data is achievable only through a few products designed for purpose. As such, we see mostly replacement strategies.

Poulter: For the current implementation, Hadoop is being introduced alongside traditional relational database data stores. Reporting is done across both data stores: summary results are stored in the database, with detailed drill-down functionality provided using Hadoop. Summary results are held in this way to mitigate any risks with data retention for the newer technology.

Lovas: Big Data is a big deal to customers, so they're not making infrastructure decisions casually. In the end, we'll see different firms employ different models: rip-and-replace and layering. The strategy will weigh numerous factors, with cost being an important aspect. Firms have to analyze the hardware cost and maintenance of insourcing versus outsourcing, and whether it's clustered storage or centralized storage, then compare it against existing architectures, factoring in possible salvage. There never is an easy answer.

    Brown: In many cases, the interdependencies of various systems would make it impractical to rip and replace it all at once. We see most new initiatives being brought up in isolated environments and legacy systems or data moving to those new technologies after the new systems have gone through the typical teething pains. Once develop-ment and support staff are comfortable with the solutions, the pace at which the older systems can be retired dramatically increases and firms are able to reap some of the promised rewards of the project.

Q Are there existing technologies, cloud computing, for example, that can be deployed in a complementary fashion alongside specific Big Data technologies to help alleviate the Big Data burden? Brown: Cloud computing offers great promise for firms needing to dynamically flex their processing needs, especially at peak times such as market open and close, without having to pay up for idle system time. That capacity can be balanced against other users' needs, particularly in more public clouds, but financial firms are still reluctant to place proprietary data or processes in the public cloud. Instead, they are increasingly building private clouds behind their firewalls to exploit the computational advantages, rescheduling batch jobs when possible to balance workloads, and reducing the overall system footprint. This flexible computing environment also enables firms to deal with sudden data bursts, like the Flash Crash, which require very rapid and extensive analysis so that they can adjust their models to respond appropriately the next time they see such an event, or even at the next market open.

    Smith: With cloud computing, absolutely. The flexible and scalable characteristics of cloud computing make it the ideal, underlying infrastructure layer on which the Big Data storage/data management and modeling/development layers lie.

Gorelik: Cloud computing is not only a natural companion to Big Data, but in the case of serious quantitative research, it is an enabler, and in some cases, essential.

    Richard BrownGlobal Head, Quantitative and Event-Driven SolutionsThomson ReutersPhone: +1 646 223 7796Email: [email protected]: www.thomsonreuters.com

    We see a number of areas that are not fully appreciated when embarking on Big Data projects, particularly in the analysis of unstructured data.Often times, we see clients attempt to analyze text believing they can have control over the secret sauce.While the motivations are understandable, it is a very difficult proposition on which to successfully execute and one can conjure up the phrase, kids, dont try this at home. Richard Brown, Thomson Reuters

  • 12 June 2012 waterstechnology.com

    affordable and variable price is a major differentiator of cloud computing architecture. In addition, in many cases, it is simply impractical to transfer significant volumes of historical market data electronically. Rather than physi-cally ship hard drives, cloud computing services deployed in the datacenters where data is available, allows analysis to be done in situ.
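To make the distribution point concrete, here is a minimal sketch that fans a parameter sweep out across local worker processes; in practice those workers would be cloud nodes, and the random-walk prices and moving-average rule are stand-ins invented for the example rather than anything Gorelik describes.

    from concurrent.futures import ProcessPoolExecutor
    import random

    def backtest(window):
        """Toy moving-average rule on a synthetic random-walk price series;
        returns (window, P&L) so the sweep can pick the best lookback."""
        rng = random.Random(42)                  # same series for every parameter
        prices = [100.0]
        for _ in range(2000):
            prices.append(prices[-1] + rng.gauss(0, 1))
        pnl = 0.0
        for i in range(window, len(prices) - 1):
            ma = sum(prices[i - window:i]) / window
            position = 1 if prices[i] > ma else -1
            pnl += position * (prices[i + 1] - prices[i])
        return window, pnl

    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:      # one worker per core; cloud nodes in the larger case
            results = list(pool.map(backtest, range(5, 205, 5)))
        print("best lookback:", max(results, key=lambda r: r[1]))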

Poulter: Yes, we are using Hadoop and DataSynapse in tandem, with the data held physically on the same machines as the grid engines. Hadoop is, in effect, replacing the role for which Coherence has traditionally been used in enterprise environments: providing access to cached data.

Lovas: Big Data's complementary technology is real-time analysis through the use of complex event processing (CEP). These two technologies define a solutions paradigm for quantitative market analysis covering quantitative trading, research and transaction-cost analysis (TCA). The ideal case is to view historical activity and real-time activity as a single time continuum. What happened yesterday, last week or 10 years ago is simply an extension of what is occurring today and what may occur in the future. Quants look to compare current market volumes to historic volumes, prices and volatility in the hunt for alpha and to control trade costs.
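A small sketch of that continuum idea: build an intraday volume profile from history, then compare live buckets against it as they arrive. The 30-minute buckets, synthetic volumes and 1.5x threshold are all assumptions for illustration.

    import random

    random.seed(1)
    BUCKETS = 13  # 30-minute buckets in a 6.5-hour session

    # Historical average volume per bucket over 20 synthetic sessions.
    history = [[random.randint(80000, 120000) for _ in range(BUCKETS)] for _ in range(20)]
    profile = [sum(day[b] for day in history) / len(history) for b in range(BUCKETS)]

    def check_bucket(bucket, live_volume, threshold=1.5):
        """Return the live/historical ratio and whether the bucket is running hot."""
        ratio = live_volume / profile[bucket]
        return ratio, ratio > threshold

    # Three live buckets arriving in real time (synthetic figures).
    for bucket, volume in enumerate([105000, 98000, 210000]):
        ratio, hot = check_bucket(bucket, volume)
        print("bucket %d: %.2fx average%s" % (bucket, ratio, "  <-- unusual" if hot else ""))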

    Chirlian: Absolutely. On the deployment side, cloud computing alleviates some Big Data concerns. Customers may now deploy Big Data solutions on-premises, on physical or virtual resources, or in the cloud. There are also innovations on the infrastructure side that allow customers to tackle specific business problems. However, these advances also push complexity to the user and require, as we said earlier, a packaging of the technology so users can realize the true value of Big Data.

Kwan: These existing technologies have to mesh with the strategy and the appropriate timeliness of the data. The cloud provides greater access, transparency and, ultimately, speed for certain types of data. But market data used for decision-making on a desktop or in an algo system often has to come over a direct connection.

Q What do most firms tend to overlook when embarking on Big Data projects?

Gorelik: We find that the focus on storing data sometimes results in insufficient emphasis being placed on the use of that data. Regulatory requirements aside, storing data is only useful if subsequent analysis yields information that is valuable. Such analysis is computationally and mathematically complex and demanding. Having tools to define and test trading ideas quickly, and then refine them, is a major competitive advantage for a trading firm. By focusing on the logistical headaches of collecting data, firms often neglect the analytical tools.

Lovas: Big Data ultimately defines the end game, that Holy Grail of profitability. Firms should not lose sight of that. The Big Data store and the solutions to manage and analyze it are the fuel that drives the engine of the trade life cycle. That includes assessing the profitability profile of new models, optimizing existing models, rebalancing portfolios and managing the fluid nature of transaction costs. They all depend on Big Data solutions to provide accurate, clean data across a firm's tradable markets. Firms need a clear understanding that Big Data is pervasive across the engine of the trading business to ensure success.

    Kwan: Strategy, Strategy, Strategy. Have you really figured out how to make trading decisions off social media and tweets? Maybe. Historically, successful firms have been able to find patterns in the market. I believe the game is the same, but the data set is much more complex. Firms have to be able to adapt to new methods of pattern finding. A big part of that is infrastructure, of which a very important piece is a new class of advanced visualization tools.
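For what pattern-finding on the new data set might look like at its very simplest, the toy below scores tweets against tiny made-up word lists; it is only a sketch of the idea, nowhere near the production-grade language handling Brown discusses next.

    # Toy lexicons and hypothetical tweets, purely for illustration.
    POSITIVE = {"beat", "beats", "upgrade", "strong", "record", "bullish"}
    NEGATIVE = {"miss", "misses", "downgrade", "weak", "lawsuit", "bearish"}

    def sentiment(text):
        """Count of positive minus negative tokens; >0 leans bullish, <0 bearish."""
        tokens = [t.strip(".,!?").lower() for t in text.split()]
        return sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)

    for tweet in ["ACME beats estimates, record quarter, analysts upgrade",
                  "ACME hit with lawsuit, guidance looks weak"]:
        print(sentiment(tweet), tweet)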

Brown: We see a number of areas that are not fully appreciated when embarking on Big Data projects, particularly in the analysis of unstructured data. Oftentimes, we see clients attempt to analyze text believing they can retain control over the secret sauce. While the motivations are understandable, it is a very difficult proposition to execute successfully, and one can conjure up the phrase, "kids, don't try this at home." Challenges range from the difficulty a portfolio manager faces in vetting a qualified linguistic analyst team, to not knowing what you do and don't have until the project has failed or finished.

We see a lot of false starts and abandoned projects in this space. In some cases it is due to unsuccessful language processing; in others, firms have trouble getting those techniques into production with the fault-tolerant, fully resilient infrastructure needed to handle such information at the speed financial markets demand. We believe the right system should be flexible enough to let users do what they want, but robust enough that they can focus on higher value-added activities such as interpreting the analytics for their investment or trading strategies. When it comes to unstructured/text analysis, Thomson Reuters News Analytics offers a great mix at both ends of the spectrum and everywhere in between.

Smith: I mentioned a few of these before: skills, data management, and so forth. Looking at the data management layer, Big Data might shift thinking from a physical orientation to a logical orientation, where things are relative for just a period of time. It might also change thinking about data quality, from everything needing to be 100 percent accurate to being directionally correct. This also highlights the complementary nature of the technology, in that it could front-end traditional tools.
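As a trivial illustration of "directionally correct" rather than exact, the check below accepts an aggregate that falls within a tolerance band of a reference figure; the 2 percent band and the numbers are made up for the example.

    def directionally_correct(observed, reference, tolerance=0.02):
        """True if observed is within `tolerance` (2% by default) of the reference value."""
        if reference == 0:
            return observed == 0
        return abs(observed - reference) / abs(reference) <= tolerance

    print(directionally_correct(1018400.0, 1000000.0))  # True: about 1.8% off
    print(directionally_correct(1140000.0, 1000000.0))  # False: 14% off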

Chirlian: Big Data is revolutionizing the way business is conducted. It isn't enough for enterprises to invest in new technologies for managing and analyzing data. They must now be able to arm their business users with easy, anytime access to the data they want and enable them to analyze that data interactively. The analytics-driven businesses of the future are those that understand this end-to-end analytic process and put in place a well-integrated technology solution that empowers the business to be confident in its decisions.
