Workshop on Knowledge Representation and Information Management for Financial Risk Management

When

July 21-22, 2010

Where

The Waterview Conference Center in Arlington, VA. (The Waterview is located in Rosslyn, just across the Potomac River from Washington, DC's Georgetown neighborhood. Instructions for accessing the free wi-fi network at the Waterview will be distributed at the workshop.)

Background

The theory and practice of good data management in the financial services industry is no longer simply a workaday operational issue. It is also a risk issue with systemic implications. The credit crisis of 2008 and the ensuing Great Recession have shone a light into the hitherto esoteric world of investment data processing. The lack of consensus or accepted best practices around standards, agreed-upon definitions, procedures, metrics, and mathematical techniques has left supervisory agencies unable either to ingest market information in a timely manner that would permit a macro-prudential response, or even to determine what information might be missing. This has resulted in the following unsatisfactory situation:
  • Corporate managers are uncertain of the trustworthiness of their internal risk and accounting numbers.
  • The academic community lacks the information required to examine and analyze actual market operations and behavior.
  • Regulators, analysts, and the financial press are denied an understanding of capital market operations sufficient to forge knowledgeable and prudent financial policy.
In response, the underlying theories and definitions of risk management and its related components (credit, liquidity, market, and operational risk management, etc.) are facing a broad re-examination. Key to this re-examination is the growing realization that the ability of capital markets systems to generate data has outstripped market participants’ and supervisors’ capacity to organize or understand it. Data volumes are extravagant. For example, in many cases, traditional SQL transaction processing implementations can no longer scale to accommodate required, mission-critical data, and new technologies for enforcing data integrity are therefore required. There is no definitive list of what information is produced by which actors, when, in what format, and how the various pieces and entities within the financial information supply chain are related. Absent such a compilation and its accompanying explanations and definitions, it is difficult to understand what can be done to enable meaningful prudential supervision.
At the level of the individual financial firm, these events have exposed serious limitations in the ability of financial operations to cope with market realities. For example, the documentation for all of the loans backing a single CDO-squared (a particular type of structured mortgage derivative) can run to on the order of a billion pages of legal fine print. Systems to support diligent analysis of this level of detail do not exist. Similarly, traditional accounting methodologies record and report a single “fair value” for financial contracts. Derivative contracts, however, frequently have asymmetric, state-contingent cash flows, so that a scalar present-value measure is not an adequate summary.
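The inadequacy of a scalar value measure can be sketched in a few lines of Python. The two contracts below are hypothetical, with payoff figures chosen only to make the point: they carry identical “fair values” yet radically different state-contingent risk.

```python
import statistics

# Contract A: a bond-like claim paying 100 in every state of the world.
payoffs_a = [100.0] * 1000

# Contract B: an option-like claim paying 0 in most states and a large
# amount in a few -- asymmetric and state-contingent.
payoffs_b = [0.0] * 990 + [10000.0] * 10

# Both contracts have the same scalar "fair value" (expected payoff)...
fair_value_a = statistics.mean(payoffs_a)
fair_value_b = statistics.mean(payoffs_b)
assert fair_value_a == fair_value_b == 100.0

# ...but the dispersion and worst-case outcomes differ enormously.
print(statistics.pstdev(payoffs_a))   # 0.0
print(statistics.pstdev(payoffs_b))   # ~995
print(min(payoffs_a), min(payoffs_b)) # 100.0 vs 0.0
```

Any reporting scheme that records only the single number 100 for each contract erases exactly the information a risk manager or supervisor needs.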
Financial risk management provides a rapidly expanding set of tools, such as stress testing and Monte Carlo analysis, for forward-looking inspection and assessment of portfolio cash flows, to identify payoff distributions, basis risks, and state contingencies. Despite the availability of these tools, the infrastructure is lacking for managing the integrity and semantics of the data that feed them. The lack of accurate and consistent data that complies with an agreed-upon definition and semantics poses a significant operational risk. As the scale of the problem continues to grow, the gap between what is needed and what is available has become more severe. While there are a number of projects underway to address practical implementation issues (for example, XBRL, ISO 20022, etc.), such efforts only skirt the deeper theoretical issues of formal semantics, logics, languages, and computer models for this burgeoning field.
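A minimal Monte Carlo sketch illustrates the forward-looking approach. All parameters here are illustrative assumptions, not a real model: the portfolio, the shock distribution, and the contingent-loss term are toy constructs, and actual stress testing would use richer, fatter-tailed scenario sets.

```python
import random
import statistics

random.seed(42)  # fixed seed so the sketch is reproducible

N_SCENARIOS = 10_000
notional = 1_000_000.0

def portfolio_value(shock: float) -> float:
    """Toy portfolio: a linear exposure plus a short-option-like position
    that loses money sharply in large down moves (state contingency)."""
    linear = notional * shock
    contingent = -notional * 5.0 * max(0.0, -shock - 0.10)  # kicks in below -10%
    return linear + contingent

# Draw market shocks from a simple normal model (an assumption) and
# inspect the full payoff distribution, not a single point estimate.
values = sorted(portfolio_value(random.gauss(0.0, 0.05))
                for _ in range(N_SCENARIOS))

mean_pnl = statistics.mean(values)
var_99 = values[int(0.01 * N_SCENARIOS)]  # 1st-percentile outcome (99% VaR)

print(f"mean P&L: {mean_pnl:,.0f}")
print(f"99% VaR:  {var_99:,.0f}")
```

The point of the exercise is visible in the output: the mean P&L looks benign, while the tail of the distribution reveals the contingent exposure that a single summary statistic hides. Of course, the simulation is only as trustworthy as the position and market data fed into it, which is precisely the data-integrity problem described above.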
These facts also have implications for systemic risk. A broad consensus is emerging regarding the components of systemic risk. Paraphrasing a recent speech by Jean-Claude Trichet of the European Central Bank, this consensus includes at least the following three elements:
  • The degree of financial interconnectedness of firms: an understanding of the obligations and exposures to one another of the nodes in the financial network is crucial to our ability to measure and contain the risk of financial contagion.
  • The extent and diversity of financial activity across the system: mitigation of systemic risk requires that we be able to measure – before a crisis emerges – accumulating imbalances of financial exposures. Similarly, we must be able to measure in a timely and reliable fashion the presence of unsustainable growth rates of financial obligations in particular market sectors.
  • The presence of large contingent obligations for market participants: the current mark-to-market or accounting “fair value” of a position is a single summary measure. By construction, it is inadequate to reveal fully the nature of contingent future exposures in complex portfolios. Instead, detailed stress tests that explore the full range of possible future contingencies are required.
Each of these three vital areas will require that firms and regulators have access to data and information that is greater in scope, quantity, and reliability than what is available today. Assembling the information to support early warning of systemic hazards will require both granular data on individual exposures and aggregated information on the endogenous accumulation and unraveling of large-scale imbalances in the financial system.

Possible Workshop Topics

  • Knowledge representation frameworks (ontologies, schemas, models, formal logics) for describing complex financial instruments.
  • Languages (operators and rules) for specifying constraints, mappings, and policies governing these instruments.
  • Model management for financial dataset schemas, parameters and constraints.
  • Rule languages for framing and checking rules on individual instruments, for example, to guarantee that counterparty obligations are well defined in all corners of the state space.
  • Languages and methodologies for establishing the reliable (to defined tolerances) domain of approximation for financial valuation and risk-management models.
  • Analytical models and data exchange protocols to support risk management.
  • Tools and methods for large-scale simulation.
  • Operational risk and process engineering for the flow of mission-critical financial data.
  • Query languages and data architectures to support high-volume financial data sets, and large-scale computation.
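As one concrete flavor of the rule-checking topic above, consider verifying that an instrument's payoff rule is totally defined over its state space. The sketch below is a toy Python stand-in for what a dedicated rule language might automate; the payoff rule and state grid are hypothetical.

```python
# A term-sheet fragment with a gap: nothing is specified for a reference
# rate of exactly 3% -- an undefined "corner of the state space".
def payoff_rule(rate):
    if rate < 0.03:
        return 100.0
    if rate > 0.03:
        return 100.0 + 1000.0 * (rate - 0.03)
    return None  # undefined corner

def undefined_states(rule, grid):
    """Return the grid points where the rule yields no payoff."""
    return [s for s in grid if rule(s) is None]

# Discretized state space: reference rates from 0.0% to 10.0%.
grid = [i / 1000 for i in range(0, 101)]
gaps = undefined_states(payoff_rule, grid)
print(gaps)  # → [0.03]
```

A real rule language would reason symbolically over the contract terms rather than sampling a grid, but the goal is the same: guaranteeing that counterparty obligations are well defined in every reachable state before the contract is booked.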