for a new product or a new market. In some cases, I have obtained ideas for some interesting derived variables from talking to adjudicators. In one example, when dealing with “thin” files, an adjudicator used the difference in days between the date of the first trade opened and the date of the first inquiry as a measure of creditworthiness. The idea was that a creditworthy applicant would be able to get a credit product soon after applying, while a high-risk applicant would take longer, and several inquiries, before getting money. Internationally, I have found a lot of nuances in lending, as well as uniquely local variables, from country to country simply by talking to bankers. In Western countries, for example, the variable “Time at Address” is useful for younger people, as they tend to live on their own soon after turning 18 or graduating. However, in other cultures, where young people tend to live with their parents, often into middle age, a high number for that variable may not be fully indicative of financial stability. Interviews with local bankers have helped me understand the data better and construct scorecards that would be most valuable to the business user.
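      As a minimal sketch only, assuming hypothetical field names for the bureau dates, the “thin file” derived variable described above could be computed as follows in Python (pandas):

import pandas as pd

# Hypothetical applicant-level data; field names are illustrative only.
apps = pd.DataFrame({
    "app_id": [101, 102, 103],
    "date_first_trade_opened": ["2015-03-20", "2015-07-01", None],
    "date_first_inquiry": ["2015-03-01", "2015-01-15", "2015-02-10"],
})

for col in ["date_first_trade_opened", "date_first_inquiry"]:
    apps[col] = pd.to_datetime(apps[col])

# Derived variable: days from first inquiry to first trade opened.
# A small value suggests the applicant obtained credit soon after applying;
# a large value, or no trade at all, may indicate repeated unsuccessful inquiries.
apps["days_inquiry_to_first_trade"] = (
    apps["date_first_trade_opened"] - apps["date_first_inquiry"]
).dt.days

print(apps[["app_id", "days_inquiry_to_first_trade"]])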

      Another good exercise to better understand the data is to spend some time where the data is created. For example, spending time in bank branches, as well as auto and mobile phone dealers, will help understand if and why certain fields are left blank, whether there is any data manipulation going on, and what the tolerance level is for filling out long application forms (relevant when determining the mix of self-declared versus bureau variables to have in the scorecard). This will help gauge the reliability of the data being studied.

      In organizations where manual adjudication is done, or where applications have historically been adjudicated manually, interviewing the adjudicators also helps in understanding data biases. Manual lending and overriding bias the data; understanding the policies and lending guidelines, as well as the personal habits of individual adjudicators, will help identify which variables are biased and how. This is similar to studying an organization’s policy rules to understand how its data is biased. For example, if all decisions above 85 percent loan to value (LTV) are made as exceptions only, then performance for all accounts with LTV greater than 85 percent will be biased and will appear to be much better than reality.
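      As an illustration of how such a policy rule might be handled during sample design, the sketch below (with hypothetical field names) flags the exception-only segment so that its apparently better performance can be reported and treated separately:

import pandas as pd

# Hypothetical development sample; field names are illustrative only.
sample = pd.DataFrame({
    "acct_id": [1, 2, 3, 4, 5],
    "ltv": [0.72, 0.88, 0.91, 0.60, 0.95],
    "bad_flag": [0, 0, 1, 0, 0],
})

# Accounts above 85 percent LTV were approved as exceptions (overrides) only,
# so their observed performance is cherry-picked and biased.
sample["exception_only_segment"] = sample["ltv"] > 0.85

# Report the biased segment separately so it does not distort the analysis.
print(sample.groupby("exception_only_segment")["bad_flag"]
            .agg(["count", "mean"]))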

      The objective here is to tap experience and discover insights that may not be obvious from analyzing data alone. This also helps in interpreting relationships later on and in identifying biases to be fixed. As mentioned earlier, superior knowledge of the data leads to better scorecards; this exercise enables the analyst to apply business knowledge and experience to the data. Application scorecards are usually developed on data that may be more than two years old, and collections staff may be able to identify any trends or changes that need to be incorporated into the analysis. This exercise also provides an opportunity to test and validate experience within the organization. For example, I have gone back to adjudicators and shown them data to either validate or challenge some of their experience-based lending ideas.

      Model Validation/Vetting Staff

      The model oversight function has always been an important part of the model development process. Its role has become even more critical with the introduction of banking regulations and model risk management guidelines in most countries. The role of model validation and its key responsibilities are detailed in documents such as Supervisory Letter 11-7 from the Federal Reserve Board (SR11-7)12 and Basel II Working Paper 14.13 Ideally, model validation staff should have:

      ● A good understanding of the mathematical and statistical principles employed in scorecard development.

      ● In-depth knowledge of corporate model validation policies, all relevant regulations, and the expectations of banking regulation agencies.

      ● Real-life experience in developing risk models and scorecards in financial institutions.

      ● A good understanding of the banking business.

      ● A good understanding of the data within the bank.

      Model validation staff should have regular checkpoints with the model developers and should define clearly what is expected in terms of documentation standards. Any divergence from what is expected, and any other red flags, should be identified as early as possible.

      The better banks have created an environment where the model development, risk management, and model validation teams work in a collaborative way, each with clearly defined roles and accountabilities. This reduces the chances of delays and “surprises,” as well as deadlocks where no one is willing to make a decision. Banks that have long, dysfunctional scorecard development processes usually have:

      ● Model developers who work in isolation, employ black box processes, and don’t share their knowledge with others.

      ● Risk management business staff who refuse to participate, or are not invited to participate, in the scorecard development process, and who do not share knowledge of how they use the scorecards or of downstream decisions.

      ● Risk management staff who don’t have even the most basic idea of how scorecards are developed.

      ● People afraid to make decisions because of vague accountabilities.

      ● Model validation staff who have never built scorecards themselves. This is a major problem with many banks worldwide. Model validation staff who ask the wrong questions and treat the development process as an academic exercise enable the production of statistically perfect but ultimately useless models.

      ● Model validation staff with no banking experience.

      ● Vague model validation processes and policies.

      Project Manager

      The project manager is responsible for the overall management of the project, including creation of the project plan and timelines, integration of the development and implementation processes, and management of other project resources. The project manager usually has:

      ● Subject matter expertise in the management of projects.

      ● An in-depth understanding of the relevant corporate areas involved in the project.

      IT/IS Managers

      IT managers are responsible for the management of the various software and hardware products used in the company. They sometimes have added responsibilities for corporate data warehouses. They usually have:

      ● Subject matter expertise in the software and hardware products involved in risk management and risk scoring implementation.

      ● In-depth knowledge of corporate data, data governance policies, and internal procedures to introduce changes to data processing.

      ● Knowledge of processing data from external data providers.

      IT managers can alert scorecard developers to issues related to data collection and coding – particularly when new data is introduced – and to implementation issues related to the software platforms used to implement scorecards and manipulate data. They must be notified of changes in order to maintain implementation timelines. In particular, where scorecards are developed using complex transformations or calculations and need to be implemented in real-time software, the IT department may be able to advise whether these calculations are beyond the capabilities of the software. The same is true for derived bureau variables, where the derivations have to be done on credit bureau interfaces or using other software. If scorecards are to be implemented within tight timelines, a good idea is to talk to IT to find out how many can be implemented within those timelines. This can then drive segmentation strategy, so that the number of scorecards developed is consistent with what can be implemented, rather than a larger number.
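      As a sketch of the kind of derived bureau variable worth reviewing with IT, assuming hypothetical field names and an illustrative sentinel for missing limits, writing the derivation as a plain, documented calculation makes it easier to confirm whether it can be replicated on the bureau interface or in the real-time decision engine:

# Hypothetical derived bureau variable; field names and the sentinel value
# for missing limits are illustrative assumptions only.
def revolving_utilization(total_revolving_balance: float,
                          total_revolving_limit: float) -> float:
    # Ratio of revolving balance to revolving limit, capped at 1.0.
    if total_revolving_limit <= 0:
        # Missing or zero limit: return a sentinel that can be binned separately.
        return -1.0
    return min(total_revolving_balance / total_revolving_limit, 1.0)

# Example: a balance of 4,500 against a limit of 10,000 gives 0.45.
print(revolving_utilization(4500.0, 10000.0))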

      Enterprise Risk/Corporate Risk Management Staff (Where Applicable)

      Enterprise risk departments are responsible for the management of both financial and operational risks at a corporate level (as opposed to the product level). They are usually also involved in capital allocation and oversight of the risk function. They usually have:

      ● Subject matter expertise on corporate policies on risk management and risk tolerance levels.

      ● In-depth knowledge of impacts on capital allocation/hedging,