
How to Maximise Datasets Created by MiFID II

Blog entry
MiFID II generates about three trillion new data points, raising the question of how financial institutions will maximise their use of new data sources created by the regulation. But how useful is the data six months into MiFID II, what challenges does it present, and will there be winners and losers among firms that can and can't grasp the data and run with it?
 
The answers to these questions and more were discussed during a recent A-Team Group webinar that was moderated by A-Team editor, Sarah Underwood, and joined by Gaurav Bansal, director at RCloud Consulting and former MiFID II programme manager; Alex Wolcough, director at Appsbroker; and John Mason, Global Head of Regulatory & Market Structure Strategic Response and Propositions at Thomson Reuters.
 
Setting the scene for the webinar, an audience poll asking what purposes organisations are putting datasets created by MiFID II to showed some 47% of respondents using the data to develop business opportunities, 38% to identify business opportunities, 34% to gain competitive edge and 28% purely for compliance. A further 28% said they are still considering how to use the data.
 
The webinar speakers noted that in their experience firms were moving beyond compliance to consider MiFID II data, particularly pre-trade data, for business purposes, although they also pointed out that these are early days in MiFID II implementation and scepticism remains about the quality of new data and how useful it is today.
 
Indeed, considering all the new data points, reference data fields, ISINs for OTC derivatives, and market data published by new trading venues and reporting mechanisms established by MiFID II, the data management challenges of using newly created data sources and datasets are many and varied. A second audience poll highlighted getting hold of the data and integrating it as the toughest tasks, ahead of poor data quality, poor data consistency and understanding the data.
 
Bansal noted problems of collecting, storing and managing the huge volumes of data generated by MiFID II, as well as reconciliation issues. Mason said challenges in the early days of fundamental change were not surprising and suggested firms struggling to source and manage new datasets could use aggregators such as Thomson Reuters.
 
Wolcough discussed the issues caused by Approved Publication Arrangements (APAs) charging different fees for access to the data they publish during the 15 minutes before it must be made available free of charge. He noted that large firms with deep pockets can afford the data, but small firms may not be able to, a disparity that could create winners and losers in financial markets.
 
Moving on from the challenges, the speakers discussed how firms could maximise use of MiFID II datasets. Bansal talked about how combining more client and product data with data from trade execution venues could provide a powerful source of information for purposes such as risk modelling and better client outcomes. Mason noted the need to take data out of silos and integrate it to maximise the potential of analytics across client, product and trade execution data, and to link the data to other information, such as news, to develop more holistic trading strategies.
 
The benefits of MiFID II datasets? Significant for both business and operations, according to a final audience poll. With the caveat that data quality must continue to improve, the speakers agreed, noting clear operational benefits, improved customer service, and the ability to apply emerging technologies such as robotic process automation and artificial intelligence to the data to achieve greater efficiencies and deliver deeper insight into customer behaviour and market activity.