Operating model for Core Banking centre of excellence

The life cycle of the CBS platform consists of two distinct phases – implementation and post-implementation. The operating model of the Core Banking CoE needs to adapt to the unique challenges at each stage of the platform's life cycle.

IMPLEMENTATION
Implementation of a core banking solution is undoubtedly a transformational programme for any bank. No off-the-shelf core banking solution can perfectly meet the requirements of any bank without the need for customization. Hence the CBS implementation necessitates a reconsideration of the operating model and processes of the bank to seamlessly leverage the benefits of automation and best practices from each CBS module.

During the implementation stage, banks are faced with challenges such as:

  • How can the bank prioritize various requirements from multiple business units?
  • How does the bank ensure that customer service, product innovation and market competitiveness are not adversely affected during the transition to a core banking system?
  • How can the bank ensure that the core banking platform is sized appropriately for the present requirements while being scalable for the future?
  • What is the mid-term and long-term strategy for specialist skills in core banking (outsource/insource)?

Best practices in operating model design for Core Banking system implementation

The Operating Model of a Core Banking CoE should adhere to the following best practices:

1. CXO level sponsorship: CBS Implementation is a strategic IT transformation programme undertaken by the bank. Hence the Governance Forums should have adequate representation at the CEO/CXO level.

2. Roles and responsibilities: Roles and responsibilities of each stakeholder should be clearly defined by the programme team and signed off by the governance forums. Typically, a project charter is circulated among team members to ensure alignment of roles.

3. Ring-fencing of implementation team: Part-time allocation of resources to the programme could result in a lack of commitment and ownership. Hence a substantial number of resources should be dedicated exclusively to the implementation programme, and the success of the core banking implementation should be included as a key performance indicator for them.

4. Target business model design: Before documentation of detailed requirements, the functional team should prepare the baseline and target operating models of the bank. This will help align the requirements to business strategy and help in understanding the level of change in each component of the operating model.

5. Prioritisation of functionalities: The programme team should get agreement from business sponsors on a prioritisation framework across various types of requirements (regulatory, operational, business intelligence, etc.) to avoid conflicting priorities during the implementation (a scoring sketch follows this list).

6. Coverage across IT landscape: The implementation team should have adequate representation from subject matter experts in the systems surrounding the CBS (EAI, web channels, mobile channels, and data warehouse). This will ensure seamless integration of functionalities across the system landscape.

7. Data and insights: The real value of a CBS implementation lies in the quality of data and information managed on the platform. The functional team should carry out detailed data modelling across the business value chain and identify opportunities for improving business insights resulting from the CBS implementation.

8. Product roadmap: The implementation team should assess the product roadmap prepared by the CBS product vendor and ensure that the bank's requirements are considered in future releases.
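
To make the prioritisation framework in point 5 concrete, here is a minimal scoring sketch in Python; the requirement categories, weights and the 1–5 impact scale are illustrative assumptions, not a prescribed standard.

```python
# Illustrative requirement-prioritisation sketch. Categories, weights and
# the 1-5 impact scale are assumptions for demonstration; each bank would
# agree its own framework with business sponsors upfront.
from dataclasses import dataclass

# Hypothetical weights: regulatory items outrank operational and BI items.
CATEGORY_WEIGHTS = {"regulatory": 3.0, "operational": 2.0, "business_intelligence": 1.0}

@dataclass
class Requirement:
    name: str
    category: str          # a key of CATEGORY_WEIGHTS
    business_impact: int   # 1 (low) to 5 (high), scored by the sponsor

    def priority_score(self) -> float:
        return CATEGORY_WEIGHTS[self.category] * self.business_impact

backlog = [
    Requirement("IFRS 9 provisioning report", "regulatory", 4),
    Requirement("Teller screen shortcut keys", "operational", 2),
    Requirement("Branch profitability dashboard", "business_intelligence", 5),
]

# Highest-priority requirements first.
for req in sorted(backlog, key=Requirement.priority_score, reverse=True):
    print(f"{req.priority_score():5.1f}  {req.name}")
```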

POST-IMPLEMENTATION
Even after the implementation of the core banking system, the Centre of Excellence receives frequent requests for additional functionalities and support services.

The agility of a bank can be severely constrained if the CBS team is not equipped with the right operating model, skill sets and associated processes to meet challenges such as:

  • How can the support services be managed effectively with respect to controls, processes, escalation, etc.?
  • How can the bank reduce operational costs (e.g. through economies of scale, increased stability, reduced maintenance, standardized solutions)?
  • What architectural controls need to be defined for driving convergence and coherence of functionalities?
  • How can the bank influence the core banking product vendor to ensure that the product roadmap continues to meet the strategic requirements of the bank?

Best practices in operating model design for post-implementation

In the post-implementation stage of the Core Banking System, the operating model of the CBS CoE should consider the following best practices:

1. Lessons learned: CBS implementation leaves behind a rich trail of lessons learned and improvement opportunities. These should be recorded meticulously by the implementation team and handed over to the support team.

2. Team retention: To ensure that tacit learnings from the implementation phase are effectively utilised in the support phase, the bank should ensure that a significant proportion (20–40 per cent) of the team involved in implementation is retained during the support phase.

3. Transition management: For various technical and commercial reasons, the vendors involved in support services may be different from the vendors involved in implementation. It is crucial to design a seamless transition between vendors across the two stages.

4. Release planning: Future releases should be discussed with business upfront to ensure alignment with the business plans for go-to-market and innovation.

5. Global service delivery: Support services during the post-implementation stage are typically delivered through the global delivery model to improve effectiveness and optimise costs. Hence the sourcing and vendor management team should have expertise in procuring global services.

6. Cost allocation: While the initial implementation is usually funded by a bank-wide budget, future customisations may serve only specific business units. An account management team should monitor and track these requirements and effectively partner with each business unit.

CONCLUSION
The design considerations outlined above will vary depending on factors such as the bank's size, complexity, geographical spread and level of technology adoption. A systematic assessment of the IT organisation, application portfolio and transformation objectives will help in designing an effective CoE.

As published in CPI Financial

Defining and adopting an IT scorecard

When it comes to typical bank IT issues, there are five broad themes: Driving strategic relevance of IT and enablement of superior customer experience; Improving return on IT spend, and enhancing its effectiveness; Continued improvement for next-generation banking; Aligning IT applications and technology infrastructure with an overall vision; Enhancing and developing the skills and potential of the IT organisation.

But a bank’s IT strategy is only as good as its implementation and adoption: success is partly a case of designing a good IT roadmap aligned to the strategy, but it is also a direct result of how disciplined the bank is in implementing it in its day-to-day functions. And that is where the Balanced Scorecard (BSC) helps.

The BSC can articulate strategy across both financial and non-financial (including customer, internal process and organisational) objectives, and it helps drive execution through an effective system of measurement. Ultimately, what gets measured is what gets managed.

Should the bank have a scorecard defined at a corporate level, a cascaded IT scorecard becomes even more effective, although this is neither mandatory nor a prerequisite. Laid out here is a four-step approach to defining and adopting an IT Scorecard for the bank, to help drive a best-in-class IT strategy.

V RAMKUMAR DISCUSSES DRIVING BEST-IN-CLASS STRATEGY VIA AN IT BALANCED SCORECARD

1. Articulate your objectives:
The starting point of a well-driven IT strategy is in having it articulated accurately. The crux of this definition process is in having objectives grouped across the four perspectives of the Balanced Scorecard:

  • How should customers (internal and external) perceive the services rendered, addressing their expectations and aligned to the overall IT vision?
  • What is the value proposition of the IT function to the enterprise, from a financial perspective?
  • How should the processes be driven across planning, innovation and operational excellence, and how do we improve them?
  • How should the IT organisation learn and improve? What technology and MIS framework would be required?

There are typically three to five objectives defined for each perspective, and collectively no more than 20–25 objectives for the overall IT function. Their linkage results in a well-articulated strategy map that defines the cascaded impact of each area on the others.
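
As a minimal sketch of this idea (all objective names below are hypothetical examples, not a recommended map), a strategy map can be represented as objectives grouped by perspective, with cause-and-effect links recording the cascaded impact:

```python
# Minimal strategy-map sketch: objectives grouped by BSC perspective, with
# cause-and-effect links from enabling objectives to the ones they support.
# Every objective name here is a hypothetical example.
strategy_map = {
    "financial":       ["F1 Improve return on IT spend"],
    "customer":        ["C1 Improve internal-customer satisfaction"],
    "process":         ["P1 Increase straight-through processing"],
    "learning_growth": ["L1 Build core banking skills in-house"],
}

# Cascaded impact: learning enables process, process enables customer
# outcomes, and customer outcomes drive the financial objective.
links = [
    ("L1 Build core banking skills in-house", "P1 Increase straight-through processing"),
    ("P1 Increase straight-through processing", "C1 Improve internal-customer satisfaction"),
    ("C1 Improve internal-customer satisfaction", "F1 Improve return on IT spend"),
]

for cause, effect in links:
    print(f"{cause}  ->  {effect}")
```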

2. Define measures and set targets:
What cannot be measured cannot be managed. It therefore becomes imperative that the strategic objectives defined for the IT enterprise are defined by specific measures, both practical and appropriate. These could be lead measures that help in taking pre-emptive steps, or lag measures that help determine the impact.

Mere measurement alone would not be sufficient unless set against the context of a target that needs to be achieved – which could be conservative, realistic, aggressive or, in select situations, aspirational or breakthrough.

Having set measures and reporting on the basis of a structured scorecard helps as a communication tool – where the impact on business is reported on a tangible metric. It also helps as a management tool, as it looks inward to the IT organisation to manage and monitor IT performance.

3. Align strategic projects:
So much of the CIO’s time gets sucked into managing projects that the job almost dilutes into a PMO function in most banks. Hardly surprising, given that 100-odd projects of varied size and scale are running at any given point in time.

The Balanced Scorecard helps in prioritizing initiatives. If a project is not directly impacting any of the stated strategic objectives, then it is not worth pursuing! Conversely, any objective would need to be supported by an initiative or a project, as its vehicle towards completion. Just as the BSC objectives are measured, it would also be important to set Quality, Time and Cost (QTC) measures and key milestones for the IT initiatives to monitor progress. The IT BSC helps one do just that.

More importantly, this alignment also brings in the discipline of differentiating the means from its end. The projects are only a means to a larger strategic goal. Just as a Lean-Six Sigma Business Process Re-engineering exercise is not an objective, but an initiative that will help address a financial or process objective, a core banking change is also the means to driving a contemporary technology platform in a bank.

4. Measure individual metrics – key for enterprise performance:
The scorecard can remain a theoretical exercise unless accountability is assigned to the appropriate owners of the respective objectives and initiatives. It is even better when Individual Performance Measurement (IPM) is completely aligned with the Enterprise Performance Measures (EPM) captured by the IT Balanced Scorecard. This tends to positively influence the values and behaviour required of successful IT organizations.

The key success factors also include ensuring ‘singularity’ of ownership (collective ownership results in no ownership) and alignment with objective-initiative ownership.

Forward-looking banks also tend to leverage the BSC as a tool to measure the performance of vendors and create a reward system built around demonstrated performance across the four perspectives. Typical measures tend to be around transparency in pricing (Financial), investment in the relationship with the bank (Customer), quality of delivery (Process) and consistency and quality of resources (Learning & Growth).

The scorecard also helps drive large transformation programmes and/or mergers and acquisition programmes, where the integration metrics are set against well-defined measures across Quality, Timelines and Monetary budgets. The Balanced Scorecard can also drive partnership and service agreements with suppliers, service providers and business functions.

However, it can only be as effective as it is made to be. Unless performance measures are reported on a monthly basis, a review is forced by the CIO with his/her direct reports, and the performance (or lack thereof) is explicitly acknowledged, the incentive to perform rapidly diminishes. Remember, people respond far better to what is ‘inspected’ than to what is ‘expected’.

5 success factors for good program management

The role of a Program Manager is, more often than not, quite understated. A good program manager breathes through the myriad activities and lives the process day in and day out: just as it takes a good director to get the most out of his artists and resources while reading the pulse of his audience in the making of a good film, it takes a good program manager to manage expectations and leverage his team to deliver a successful engagement.

Having successfully implemented multiple core banking and business transformation engagements of various sizes and kinds, and having earned quite a few lessons along the way over the last 17 years, I have attempted in this article to identify the five most critical success factors common to any well-managed project. These are not meant to be exhaustive – but they are certainly quintessential for any successful project.

While the attempt is to focus on IT projects, these factors are quite generic and apply to any large transformational project.

DEFINE YOUR GOAL POST, SET EXPECTATIONS UPFRONT
No journey can be completed on time unless the destination is defined upfront! While this seems an obvious piece of rhetoric, ironically it is precisely for this reason that most projects fail.

Too many stakeholders in a project can do no better than too many cooks spoiling the broth, and if there is no common quantification of the ultimate objective to be achieved, even genuine successes go unrecognized.

Stakeholders need to have a common vision of what is being set out to be achieved, what the benefits are, and why such a project is critical for the organization. The conclusion of one activity may mean the commencement of another, and the measurement of progress has to be holistic.

Going live in an IT project, for instance, is but a means of empowering business and not an end in itself. It’s critical to draw this distinction upfront and maintain it throughout the project.

“GOOD MANAGEMENT IS ABOUT 90% PLANNING AND 10% EXECUTION. AS THEY SAY, IF YOU FAIL TO PLAN, YOU ARE PLANNING TO FAIL. A PLAN, TO BE IMPLEMENTABLE, HAS TO BE BOTH COMPREHENSIVE AND COMPREHENSIBLE”

GET YOUR PLAN RIGHT: BOTH MACRO AND MICRO
Good management is about 90% planning and 10% execution. As they say, if you fail to plan, you are planning to fail! While planning is so much common sense, where do things go wrong?

In most situations where a plan is said to exist, one sees the plan run into thousands of lines, but still the big picture is missing. It’s quite an arduous task for any steering committee to decipher what’s going on unless you have a one-page definition of the project. It is quite surprising, yet so true, that projects get lost so much in the detail that the woods almost invariably get missed for the trees.

The other extreme is also common, where planning stops at a ten-thousand-feet level, and the details of inter-dependencies, pre-requisites and critical path impact are left undefined. The devil, as we all know, is in the detail! A plan, to be successfully implementable, has to be both comprehensive and comprehensible.

But planning does not stop at defining the activities and their timelines. The key milestones in the course of the project, where the stakeholders need to pause, reflect and validate whether progress is as per plan, have to be well defined, and the impact of one activity on another should be well established.

“SUCCESSFUL PROGRAMS INVEST A FAIR DEGREE OF TIME AND EFFORT TO COMMUNICATE THE PROJECT BENEFITS TO ALL STAKEHOLDERS, AND HOW THEY FAR OUTWEIGH THE PAIN RELATED TO THE CHANGE, IF ANY”

DETERMINE METHODOLOGY, RESOURCES AND PREREQUISITES
Objectives and timelines are best achieved only when there is perfect alignment between the methodology adopted and the resources deployed. Defining the approach is both an art and a science – one has to draw a delicate balance between timeline and perfection.

Let’s take an example. While we all know testing is a key activity in any core banking implementation project, what typically tends to get overlooked is the alignment of the timelines and quality guidelines with the methodology and resources deployed.

Each kind of testing has a different context – system integration tests, user acceptance tests, performance tests, operational acceptance tests and so on. The skill-set required, the use of outsourced or specialized teams, the tools adopted, the scope of the test-bed, defect density tolerance levels, and the entry and exit criteria to measure efficacy – all of these go hand-in-hand with the timelines assigned for this activity. Unless a fine balance between perfection in quality assurance and adherence to project timelines is established, it’s easy to go lop-sided here!

Having clarity on methodology and resource requirements also helps link the pre-requisites of each activity to their availability and plan dynamically for any deviations as they happen. An alternative approach could quickly be adopted only if there is absolute clarity on what needs to be achieved and by when. And that does help!

COMMUNICATE TO ALL STAKEHOLDERS: RIGHT FREQUENCY AND RIGHT INFORMATION
The simplest thing to do, and yet the most commonly missed activity in large projects is ensuring adequate communication. And it’s still the easiest issue to fix!

In every transformational project, there are three kinds of stakeholders: decision makers, end-users or consumers of the project, and the project team that executes it. The project team also includes the larger community of vendors and service providers. The key to success is in ensuring that all of them measure progress uniformly.

The disconnect begins when people in each group do not see the same picture as the others, or, worse still, when people within each group are not in alignment. And that is quite likely to happen if the source of information is not the same – quite common when projects are large and multi-faceted.

“EVERY TRANSFORMATION – BE IT BUSINESS OR TECHNOLOGY, BRINGS WITH IT AN ELEMENT OF CHANGE. THE DEGREE OF CHANGE MAY VARY, BUT THERE IS NO RUNNING AWAY FROM THE NEED TO MANAGE CHANGE”

The fix in these situations is quite simple. First, keep the status update coming from a single source, broadcast at a consistent frequency. Second, it has to be holistic, so that all aspects of the project are covered, but still short and to the point, and it should refer to the same plan that everyone is looking at.

Lastly, create and encourage multiple forums for people to communicate. The more you have a free format of communication driven with a common point of reference, the higher the degree of project comfort with stakeholders.

EMBRACE CHANGE, AND CELEBRATE SUCCESSES: BOTH SMALL & BIG
Every transformation – be it business or technology – brings with it an element of change. The degree of change may vary, but there is no running away from the need to manage change. Even if everything about the project has gone well – in terms of its activities, timelines, deliverables and costs – if the change it brings is not welcomed and embraced willingly by stakeholders, the net result is wasted expenditure.

Successful programs invest a fair degree of time and effort to communicate the project benefits to all stakeholders, and how they far outweigh the pain related to the change, if any. Periodic and consistent celebration of project successes – at every key milestone – is key to driving home this message.

Of course, risks of various degrees – both internal and external – come into play in any large transformation, but the chances of managing them well are much higher if one gets the above five key success factors right. At the end of the day, if you get the definition of what is to be done, when, how and by whom right, it would certainly be quite hard to go wrong from there!

Selecting a technology partner – key to a successful RFP process

Across the globe, banks and financial institutions tend to focus much of their energy on selecting multiple service or solution providers, and these selection processes are governed largely by two important principles: the degree of detail required in the solution/service that is being procured and the timeframe in which the selection needs to be accomplished.

It is not uncommon for institutions to attach a high degree of sensitivity to one or other of the above, not necessarily both, at a given point of time – which is reflective of the context in which the procurement is carried out.

Selection Methods and Relevance of the RFP

Invariably, the bank’s choice of vendors tends to fall into one of the following four models, based on the size and scale of the investment, the degree of detail envisaged, the time available for the selection, and established relationships. The four models are:

  • Traditional: Vendors are shortlisted through a traditional approach of a request for information (RFI), followed by a request for proposal (RFP), and then the final list is compiled.
  • Proof of concept (PoC): While generally the RFI process is skipped, the focus here is to have the RFP along with detailed business requirements/specifications, followed by the selection process.
  • Market leader: When there is a limited timeframe to select a solution and the vendor is an established market leader, then banks tend to apply the market leader method, which is considered to be a low-risk model.
  • Strategic partner: If the bank or the institution has an established relationship with the solution/service provider, or has a subsidiary unit that provides the service required, then the approach is to go with a strategic partner. This is also a quick selection approach.

[Figure 1: The four vendor selection models, mapped by degree of detail and selection timeframe. Source: Cedar Management Consulting]

As one can see in Figure 1, the more traditional approach and the PoC approach attach a higher degree of detail to the selection process, and the timeframe is also longer. This is inevitable, given that the vendor selection process also requires receiving and reviewing responses to the RFI/RFP floated by the bank.

The other two approaches are relatively straightforward, although the risk/return quotient is clearly a function of how well you know the market (and its leader) and also the degree of confidence attached to the strategic partner, wherever such a player exists.

Nuances of the RFP

An RFI, by definition, seeks information that describes the vendor or the service provider in more detail, in order to better understand who’s who and thereby determine what should be included in the RFP. On the other hand, when it comes to an RFP, the focus is less on the vendor’s services and more on the bank’s needs and the ability of the vendor to meet those expectations.

The RFP contents typically include the following:

  • Introduction to the bank or the institution floating the RFP.
  • About the RFP – its objective, expectations and expected outcome.
  • Terms of the RFP – the process, timelines, responsibilities, do’s and don’ts.
  • Formats and templates that need to be used in the submission, including exhibits and annexes.
  • Most importantly, the requirements that the service provider needs to address.

These principles are applicable to any of the procurement activities commonly undertaken by banks – while most pertain to technology vendors/solutions, they are also commonly applied to procurement in many other areas, including outsourcing partners, administrative procurements, etc.

While each of the above elements is critical in its own right, it is important to review some of their nuances, in the context of four do’s and five don’ts for banks to consider while processing an RFP and administering a selection process.

Four Do’s

  • Keep the objective clear and focused: The clearer the focus of the RFP, the more accurate the selection is likely to be. Clarity of the RFP also comes from clarity about what is expected from the service or solution provider. Most selections that have not resulted in the right choice of vendor can be traced to a lack of clarity in the RFP.
  • Communicate the RFP process, and terms: Timely completion of the selection process is directly dependent on the completeness of the vendors’ RFP responses, and ensuring conformance is far more effective when the RFP process and terms – what is to be submitted, who is to be contacted, when the submission is to be made and other timelines, where the evaluation will be processed, and how the process will move ahead – are communicated clearly upfront.
  • Insist on adherence to templates: Effective comparison of vendor submissions requires standardisation of responses, and templates serve a huge purpose here. The most effective RFPs are those that clarify submission formats at the beginning and insist on their adherence. This helps in making an effective comparison of the vendors, avoiding hidden costs, and enabling a standardized approach to the evaluation.
  • Ensure well-defined requirements: It is always advisable to provide maximum clarity to the solution provider on requirements – including as much detail as possible, with relevant addendums if required. Not only does it ensure that the expectations are well defined, but it also helps the service provider estimate the effort, timelines and costs much more effectively.

Five Don’ts

  • Don’t pass on the ambiguity: It is not unusual to notice RFPs using phrases like “including but not limited to…” and generously using “etc.” in their lists of requirements – such instances only smack of ambiguity and an absence of clarity. Not only do vendors find it difficult to offer an appropriate response, but it also ends up affecting the cost structures proposed, which in turn negatively affects the bank’s experience.
  • Size doesn’t matter: It is not imperative that an RFP is a 300-page document. It is more important that the RFP exactly communicates what is required and what is expected of the vendor. The bigger the RFP, the more the effort in responding to it, and the lower the quality is likely to be.
  • Discourage disparate and informal communications: Perhaps one of the most important areas of attention in any RFP process is to strictly maintain formality in the communication with vendors, and ensuring ‘uniformity’ of communication to all participating respondents. Not only does this ensure the process is tight, it also ensures you get the best results out of the process.
  • Avoid short-cuts: The very reason why any bank would choose an RFP approach – over a market leader approach for example – is because it needs a higher degree of detail involved in the selection process, and is willing to incorporate the additional time involved. To adopt short-cuts in the middle of the process not only erodes all the merits of the RFP process but also may result in a sub-optimal vendor selection.
  • Do not underestimate timelines: Having an aggressive timeline for an RFP submission can be as damaging to the RFP process as not communicating the timeline expectations upfront to the service provider. There have been many instances where the right service provider has been left out just because the vendor could not submit the response to the bank in time.

Maximizing Returns on the RFP Process Investment

One of the most important aspects of investing in an RFP (and procurement) process in general is that the investment has a high recurrence value, and can be significantly leveraged in multiple future procurement activities that involve either or both of two situations:

  • Similar products/services need to be procured
  • The same service/solution providers are required.

However, it is important to note that unless the steps involved in the above, as well as the documentation processed through the selection (including the RFP, the responses received, the evaluation papers and the vendors’ clarifications), are archived, it becomes difficult to correlate unexpected situations that arise down the line with the activities performed during the RFP evaluation process. This is possibly why they say that managing an RFP process is both a science and an art.

For this reason, it is sometimes useful to consider having third-party consultants manage this process. Whenever the bank or financial institution chooses to run the process internally, however, the above will be useful in ensuring maximum mileage from the invested time, effort and costs.

Emerging global trends in Banking Architecture

Key trends in banking technology are keeping pace with the winds of change. What are the priorities for CIOs and CTOs these days? What are the technologies to invest in today, and what are the ones to keep an eye on for the future?

The changing trends in the banking technology space are best reflected in discussion topics raised by CIOs and CTOs of pretty much any bank. Ten years ago, a typical conversation would be largely focused on the core banking and back-office systems that needed to be overhauled. Today, there has been a considerable shift towards service-oriented architecture (SOA) and leveraging outsourcing models to drive cost structures down.

Talk to any technology head in the banking industry today, and it would be impossible to have missed the buzzwords of digital, cloud, analytics, mobility and big data. Obviously, all of these have an impact on the focus areas and influence both the shape and spend of the application architecture of the bank.

When looking at the trends of change, a quick look at the typical application architecture of a bank is relevant. Cedar’s approach to architecture design includes eight layers:

  • Channels/interfacing with the customer;
  • Services/front-office;
  • Risk management/middle-office;
  • Core systems/back office;
  • Analytics, reporting and business intelligence (BI);
  • Support layer that helps with the backend functions;
  • External layer (including interbank and payments);
  • Middleware layer that stitches them all together.
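
Purely as an illustration (the layer names below are my shorthand, not a Cedar artefact), these eight layers can be captured as an enumeration that an application inventory might be tagged against:

```python
# The eight architecture layers from the list above, as an enumeration.
# Member names are illustrative shorthand only.
from enum import Enum

class ArchitectureLayer(Enum):
    CHANNELS = "Channels / customer interface"
    FRONT_OFFICE = "Services / front-office"
    MIDDLE_OFFICE = "Risk management / middle-office"
    BACK_OFFICE = "Core systems / back office"
    ANALYTICS = "Analytics, reporting and BI"
    SUPPORT = "Support / backend functions"
    EXTERNAL = "External (interbank and payments)"
    MIDDLEWARE = "Middleware / integration"

# A hypothetical application inventory tagged by layer.
inventory = {
    "Mobile banking app": ArchitectureLayer.CHANNELS,
    "Core banking system": ArchitectureLayer.BACK_OFFICE,
    "Enterprise service bus": ArchitectureLayer.MIDDLEWARE,
}
for app, layer in inventory.items():
    print(f"{app}: {layer.value}")
```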

So what do these new buzzwords mean to the new banking technology architecture, and where is the impact mostly felt? What does it mean to a bank which has reveled in adopting state-of-the-art technology as these winds of change have hit the shore, and what does it mean to the new age digital banks who are looking to change the way banking is done? And more importantly, how do banks respond to these and effectively embrace the change, with the twin objectives of minimizing disruption and maximizing adoption?

The landscape of change, driven by the advent of digital and its implications for the technology architecture, could perhaps be summarized across four broad trends observed, in some shape or form, across global, regional and local banks. Even for those who have not yet adopted them, the currents are strong enough to propel investments and management focus in the direction of these trends.

CHANNEL TOUCH-POINT: THE NEW AGE CUSTOMER EXPERIENCE
Customer experience takes center stage in any services business. With the advent of mobility and the ‘internet of things’, banks have woken up to the changes around them, and to the threat posed by players in other verticals – including telecom and retail – to the way in which banking will be done in the days to come.

From traditional branch banking, where innovations were driven by format and offerings, investments in channels have significantly moved into the digital world and SOA. Leveraging every digital touch-point of the customer and offering an integrated multi-channel service have emerged as the key focus areas. This can be observed in the new age innovations in video banking, where the customer can speak to his/her branch or relationship manager over a video call.

In the US, Pittsburgh-based Dollar Bank has introduced video tellers, with both drive-through and walk-up options, with the teller able to remotely control the machine and guide customers through most branch transactions. A camera transmits the customer’s image to the teller at the bank’s customer service center in Pittsburgh and that teller’s image is transmitted to the ATM.

Barclays has rolled out video banking services in the UK, allowing its customers to have a video conversation with their advisors. Emirates NBD became the first bank in the Gulf to introduce video-enabled interactive tellers. It also has an ID scanner and signature pad, allowing customers to withdraw larger amounts of cash than their standard daily ATM limit.

ICICI Bank in India has launched a service for non-resident Indian customers to reach their customer care representatives from anywhere in the world over their smartphone.

Bank Audi in Lebanon has launched the country’s first interactive teller, which allows a customer to conduct transactions with a live yet remote teller at an ATM. Odeabank, Bank Audi’s subsidiary and the first bank to be granted a licence in Turkey in the last 15 years, is also preparing its digital offering, including the latest in-branch technology, mobile electronic signatures and interactive video.

Another enabler is the tablet, which allows in-branch staff to come out from behind the counters and interact with customers, bringing improved service and the chance to cross-market. Those tablets can control the ATMs, so staff can guide customers through their transactions. Khaleeji Commercial Bank, for example, has recently introduced this in Bahrain, which has been well received by its customers.

The adoption and adaptability of these customer touch-points will eventually evolve across the three stages of presence, interaction and transaction. It is a clear trend that banks are prepared to make heavy investments to have a strong channels framework.

ANALYTICS: MAKING THE MOST OUT OF YOUR DATA
The data that resides with a bank about its customers has always been rich – be it demographic details or spend and saving patterns – but the fact of the matter is that it has been acquired, stored and processed across multiple systems in the past. The last decade saw banks vying for the consolidation of such disparate pieces of information through initiatives in data warehousing (DW) and customer relationship management (CRM) systems. And the arrival of cloud and big data has only multiplied the volumes of data that a bank now has access to.

What makes it all worthwhile, however, is when such data is appropriately processed to unlock the real potential of determining the customer’s next best requirement – and that is where predictive analytics comes into play.

Be it determining the next best product for a customer, identifying who is likely to attrite, or predicting which customers are the best candidates for cross-sell or up-sell, banks are increasingly finding the value of predictive analytics to be immense.
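
As a hedged illustration of the attrition use case, the sketch below trains a toy churn classifier on synthetic data; the features, the labels and the choice of logistic regression are assumptions for demonstration, not a production recipe.

```python
# Toy churn-prediction sketch on synthetic data. The features, the label
# generation and the model choice are illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)
n = 1000
# Hypothetical features: products held, monthly transactions, tenure (years).
X = np.column_stack([
    rng.integers(1, 6, n),
    rng.poisson(20, n),
    rng.uniform(0, 15, n),
])
# Synthetic ground truth: fewer products and shorter tenure -> higher churn.
churn_prob = 1 / (1 + np.exp(0.8 * X[:, 0] + 0.2 * X[:, 2] - 3))
y = rng.random(n) < churn_prob

model = LogisticRegression().fit(X, y)
# Score the whole book and surface the customers most likely to attrite.
scores = model.predict_proba(X)[:, 1]
print("Top 5 at-risk customer indices:", np.argsort(scores)[-5:])
```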

The advent of analytics has also redefined the contours of meaningful BI. From post-facto, lag-indicative reporting, technology investments are now driven to produce executive dashboards that are more ‘lead’ indicative. This is a distinct trend, adopted by smart, savvy banks independent of their size.

PAYMENTS: THE DIGITAL WAVE
Contactless payments, NFC, mobile wallets, mPOS, virtual and cryptocurrencies, bitcoin – the terminology list in a new age banking dictionary seems to grow by the day! And all of this simply means just one thing: the payment model – the way payments are made – is simply not going to be the same.

We are already witnessing a flurry of global activity here. NatWest and RBS are offering Apple Pay to their customers in the UK. Nedbank in South Africa is looking to offer PocketPOS, an EMV-certified mPOS platform catering to its SME customers. Samsung Pay is launching imminently in South Korea with the support of major banks, followed by the US, UK, Spain and China.

This trend is not only observed in the space of retail payments but in the B2B and cross-border payments industry as well, which have seen significant advancement driven by technology.

From a banking technology architecture standpoint, there are implications too. To start with, there would be a need to collaborate with new modes of retail payments such as Apple Pay, and investing in technology that drives straight-through processing (STP) in the area of payments. As banks begin to expose APIs for third parties to embed in their applications, a key implication will be the need for increased investment in cyber security and fraud management, as digital payments also pose a much larger business risk, and not just operations or technical risk.

THE ADOPTION AND ADAPTABILITY OF CUSTOMER TOUCH-POINTS WILL EVENTUALLY EVOLVE ACROSS THE THREE STAGES OF PRESENCE, INTERACTION AND TRANSACTION. BANKS ARE PREPARED TO MAKE HEAVY INVESTMENTS TO HAVE A STRONG CHANNELS FRAMEWORK

CORE ENGINE: RETAIN, UPGRADE, REPLACE OR OUTSOURCE?

And here comes the issue of the biggest piece that sits in the middle of any technology stack for a bank: a core banking engine. No matter how big or small the bank is, it is inevitable that the largest investment – and therefore the highest degree of C-suite attention – is focused on the core banking platform and its extensibility to the new age demands.

Part of the challenge here is the willingness to deal with the elephant in the room. Core banking replacements are akin to changing aeroplane wheels at thirty thousand feet, with implications not just from a ‘big ticket’ investment standpoint but also in terms of a serious commitment of management bandwidth.

Now, that is easier said than done. This is why we still find banks happily running core banking engines of yesteryear, with the technology architecture window-dressed with modern channels and front-end systems. However, this is not sustainable.

The one definite trend we are likely to witness is the change that banks will inevitably need to make to their core banking platforms. Depending on how old the platform is, and how far it extends to the demands of the new era, the choices banks face vary between upgrading to a later version and having it fully replaced. While these may not be overnight decisions, they are inevitable – and hence bound to happen, slowly but surely.

So, in conclusion, it would only be fair to say that banks can ill afford to ignore the change happening all around them – and the implications for the technology architecture are obvious. While there may be differences in approach with regard to the sequence of the change, the speed of the overhaul, and the choice of suppliers or solutions, there is no running away from the need to align your technology with what could just be the new age of banking.

Developing a bank’s scorecard

WHILE BANK PERFORMANCES TEND TO GET TYPICALLY MEASURED BY THE GROWTH OF ASSET BOOK OR NET PROFIT, A HOLISTIC ASSESSMENT OF THE BANK’S PERFORMANCE NEEDS TO BE BASED ON MEASURING BOTH FINANCIAL AND NON-FINANCIAL PERFORMANCE – WHICH IS THE TRUE ESSENCE OF A BALANCED SCORECARD.

When banks tend to measure their “enterprise performance,” the discussion generally gets restricted to the financial performance – the asset book or the net profit, and in some cases, the branch network. While these do reflect growth, they are more of a lag indicator of how the bank performed during the last reporting period, and do not reflect the sustainability of the performance on an ongoing basis.

Considering that the ultimate purpose of the strategy is to drive enterprise performance, the balanced scorecard designed for a bank also needs to draw a balance between the financial and non-financial objectives across all four perspectives – financial, customer, process, and learning & growth.

The four key building blocks for an effective balanced scorecard in any industry would typically consist of developing the strategy map, defining the right measures, aligning key initiatives with the objectives, and assigning the right ownership. While the steps to build the scorecard are quite similar across industries, the challenges and nuances that are applicable to a bank can be quite different.

More specifically, the key focus areas that need to be borne in mind while developing a balanced scorecard for a bank, through each of the four building blocks as defined above, could be quite different from other industries and need to be approached appropriately. Let us explore these in greater depth.

STEP 1: BUILDING THE STRATEGY MAP FOR THE BANK
The strategy map, while establishing the top 20–25 objectives across the four perspectives, also has to drive the collective views of a bank – across multiple facets and business divisions. At the same time, the map needs to distinctly reflect the bank’s priorities – both short-term and long-term. The biggest challenge is in building this balance.

1. FINANCIAL: Even while the bank tends to improve on its returns to shareholders, it is imperative to determine whether the asset book should grow faster than the margins, or vice versa. A good strategy map always helps define the primary financial objective for the bank, also referred to as “F1.” The financial objectives should clearly define the revenue drivers, cost strategy, asset book strategy, and risk management strategy.

WHILE THE STEPS TO BUILD A SCORECARD ARE SIMILAR ACROSS INDUSTRIES, THE CHALLENGES THAT ARE APPLICABLE TO A BANK CAN BE QUITE DIFFERENT.

2. CUSTOMER: With the increased proliferation of differentiated segments, clarity on the segment needs and defining the proposition for the target segment is important in the strategy map. The value proposition essentially includes the product and service attributes, the image of the bank, and relationship quality. It is critical to recognize the difference in approach for the corporate, commercial, business, and retail banking segments when one goes about defining the customer objectives.

3. PROCESS: From the point of identifying the customers’ needs to the point of having the needs fully satisfied, the process framework of the bank needs to excel in the areas of innovation, operational excellence, and delivery of service quality. Continuous improvement to internal processes serves as the backbone for delivering customer expectations.

Not only should the process perspective focus on the products and services for the customer, but it should also set the right priorities for the channels – an area that is increasingly becoming important, especially when we live in an era of mobility and anywhere, anytime banking!

4. LEARNING & GROWTH: As the perspective that helps define the organization’s objectives toward human competencies, technological infrastructure, and the climate for action, it pretty much serves as the foundation layer, reflecting the organizational philosophy and its priorities. Being a customer service-driven culture can be quite a different ball game from being a process- or performance-centric culture. It is not about what is right but about what is the higher priority for the bank at that point in time, and making that well defined.

STEP 2: DEFINING THE RIGHT MEASURES
Measurement tends to be the key to the science of management. Having set the top 20–25 objectives that constitute the cockpit view across four perspectives, it becomes imperative to define the measures that best articulate the objectives. Unless the objectives can be measured, they cannot be managed. People respond to what is inspected, not what is expected, and therefore measures drive organizational behaviour and help test the bank’s progress in achieving the strategic objectives.

So, what is different about the measures that need to be defined for a bank? It is all about selecting the right measures that help calibrate between lead and lag factors, and a mix of ratios, monetary values, and survey results. Typical examples of measures applicable in the banking context include the following:

  • Financial perspective: Typical financial measures include ROE, ROA, asset book (corporate, commercial, business banking, retail), fee income, cost of funds, cost efficiency, asset utilization, and NPL%.
  • Customer perspective: The customer-related measures generally include customer acquisition, channel transaction mix, cross-sell percentage, products/customer, customer satisfaction index, and customer attrition. Product features, image, and relationship strength are key measures for a successful customer and product strategy.
  • Process perspective: Since this is reflective of efficiency and productivity, the typical measures include operating cost, process SLA, branch transaction cost, and alternative channel penetration. Compliance is also measured by audit ratings.
  • Learning & growth perspective: Measurement of training days/ employee, percentage of variable compensation, and employee satisfaction drive organizational measures. Technology related measures include key milestones of technological initiatives and quality of MIS.

Tightly linked with the choice of measures is the process of defining the targets. Targets need to be defined as a mix of easy, realistic, and stretch targets.
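
A minimal sketch of such a measure definition might look as follows; the measure names, lead/lag tags and target figures are hypothetical examples, not recommendations.

```python
# Illustrative scorecard-measure definition: each measure is tagged as a
# lead or lag indicator and carries realistic and stretch targets.
# All names and numbers below are hypothetical.
from dataclasses import dataclass

@dataclass
class Measure:
    objective: str
    name: str
    indicator: str         # "lead" or "lag"
    realistic_target: float
    stretch_target: float

measures = [
    Measure("F1 Grow return on equity", "ROE %", "lag", 14.0, 16.5),
    Measure("C2 Deepen relationships", "Products per customer", "lead", 2.5, 3.2),
    Measure("P1 Shift to digital channels", "Alternative channel mix %", "lead", 60.0, 70.0),
]

for m in measures:
    print(f"[{m.indicator:4}] {m.name}: target {m.realistic_target}, stretch {m.stretch_target}")
```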

A select set of measures may be defined to be “break-through” targets, which help in aspirational growth or quality. I know a CEO of a large retail bank, who is famously quoted by his team for walking into the planning session, writing a number on the whiteboard that is way beyond what has been planned by the team, and walking out without uttering a word, but with an aspirational message well delivered. Well, that is a breakthrough target that his team is expected to deliver, no questions asked!

STEP 3: ALIGNING KEY INITIATIVES
While objectives help articulate the destination, the approach to reach them still depends on successfully delivering the right set of initiatives. Unless the right initiatives are defined and prioritized appropriately, it is quite hard to achieve the measures set for each objective, however straightforward they may be.

Initiatives are means to deliver the end, which are the objectives. It is not uncommon to see banks mixing up initiatives with objectives. For instance, a Lean-Six Sigma business process reengineering exercise is not an objective but an initiative that will help address a financial or process objective, just as a core banking platform transformation will help with learning & growth objective, or a rebranding initiative will help address a customer objective.

Interestingly, the process of identifying the initiative and mapping it to the objective has a consistent pattern that we have observed across multiple banks where Cedar has helped in developing the scorecard. Some of the most common initiatives that are important for delivering the strategic objectives are as follows:

1. Financial and cost efficiency: Typical initiatives include new market and branch expansion, fee income improvement, asset mix realignment, risk management, and cost reduction exercises.
2. Products and channels: Market research & assessment, new product development, alternative channel design and migration, building loyalty program, realigning branch locations, fees and charges realignment.
3. Process enhancement: Activity-based costing, Lean Six Sigma process enhancement, process centralization, process and technology outsourcing.
4. Organization and human capital: Organizational structure design, compensation benchmarking & realignment, ESOP program design and rollout, training and development, employee engagement, performance management system design.
5. Technology: Core banking transformation, key application rollout – CRM, loan origination, treasury, ALM & risk, transaction banking system, data warehousing, business intelligence, HRMS, and enterprise GL.

IT IS ALL ABOUT SELECTING THE RIGHT MEASURES THAT HELP CALIBRATE BETWEEN LEAD & LAG FACTORS, AND A MIX OF RATIOS, MONETARY VALUES, AND SURVEY RESULTS.

STEP 4: ASSIGNING THE RIGHT OWNERSHIP
While this is generally quite straightforward – however well defined the objective is and however effective the measure is – an objective that is not owned will ultimately be an orphaned objective. Even where there are differences of opinion about who should own an objective, it is important to recognize that the custodian of the “data” that provides the status of the measure need not be the owner of the objective.
This is a subtle but important fact to be kept in mind. There are four simple rules to be borne in mind when objectives are assigned to owners while designing and implementing a scorecard:

1. Define primary ownership: The F1 or the primary financial objective should be the responsibility directly owned by the CEO. If it were a cascaded scorecard of a business unit such as corporate or retail, the unit head would be the owner of the primary financial objective.
2. Ensure singularity of ownership: It is a preferred practice to have ownership of an objective assigned to one owner. Joint ownership with more than one owner should be an exception rather than the rule.
3. Align objective-initiative ownership: Where the initiative is directly linked to an objective one-to-one, the ownership of both initiative and objective is best assigned to the same individual.
4. Align enterprise-individual performance measurement: Lastly, the objectives that have direct ownership from an individual are best linked to the individual performance measure (IPM) as well. This also helps to build the linkage between the enterprise and individual performance measurement.

The value of any measurement tool is as good as its usage. The key to the successful leverage of a balanced scorecard lies in making it an integral part of the day-to-day business-as-usual. When the monthly meeting of the CEO is based on the performance as indicated by the measures of the 20–25 strategic objectives, not only does it bring in the effectiveness of focus to the conversation, but it also enables a deeper dive into the nature of the underlying issue that had potentially resulted in the performance or the lack of it.

Aligning initiatives with the objectives and building the ownership, therefore, becomes imperative. After all, what cannot be measured cannot be managed!

Core banking implementation: changing engines at 30,000 feet

Core banking implementations bear a striking resemblance to changing engines while a plane is up in the air, as the context here is not very different. The bank implementing the solution is, in most situations, a live, running and operational entity, and when the core engine that runs the bank has to be changed, it has to be done with zero disruption to its operations and minimal inconvenience to its customers.

Ask anyone who has recently gone through an implementation, and you would hear them totally agree that this is easier said than done!

So what does a typical core banking implementation entail? What does it take for a programme to be successful – or at least to ensure that the most common mistakes and pitfalls are avoided? Where do implementations tend to go wrong, and how does one pre-empt it? While the questions and answers may be quite unending, this article looks to identify the seven phases of a typical core banking transformation: what they are about and what they mean to the project, what happens in each phase, what to look out for, and, more importantly, how to avoid some of the errors that are all too common and sometimes inviting.

Having successfully executed over 40 large technology transformation programmes, from fast-track eight-month implementations to large end-to-end four-year programmes, there are truly quite a few things that one learns along the way, and this is an attempt to share the key ones.

PROGRAMME PLANNING: THE JOURNEY STARTS
Perhaps the most obvious activity for any large project is the plan. As they say, if you fail to plan, you are planning to fail.

The activities, inter-dependencies and pre-requisites: Even before we get to the timelines, it is important that all activities that govern the core banking programme, and all other aligned activities, are defined exhaustively.

For example, if the bank is undertaking a major expansion programme on its channels, it cannot be carried out in isolation without impacting the core banking transformation and vice versa.

Timelines, milestones and critical path definition: A typical core banking programme plan would have at least three to four thousand line items. It’s important that at least ten major milestones and about 20 minor milestones are identified in the core banking programme, as they tend to provide the guiding validation of whether the programme is proceeding on track. Some of these milestones also get linked to payments and therefore become even more important to track, with clear definitions of entry and exit criteria.

A critical path is also drawn from the start to the end of the project, which helps in understanding the impact of any slippage on the final go-live date. It also helps to begin with the end in mind: for example, it’s important to determine upfront whether the roll-out is likely to be ‘big-bang’ or phased.
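
As a small illustration of how the critical path falls out of the activity list, the sketch below computes the earliest go-live week over a toy dependency graph; the activities, durations and dependencies are hypothetical.

```python
# Toy critical-path calculation over a small activity graph. Activities,
# durations (in weeks) and dependencies are hypothetical; a real core
# banking plan would run to thousands of line items.
from functools import lru_cache

durations = {"customise": 16, "build_interfaces": 10, "migrate_data": 12,
             "sit": 8, "uat": 6, "cutover": 2}
depends_on = {"sit": ["customise", "build_interfaces"],
              "uat": ["sit", "migrate_data"],
              "cutover": ["uat"]}

@lru_cache(maxsize=None)
def earliest_finish(activity: str) -> int:
    """Duration plus the latest finish among all prerequisites."""
    prereqs = depends_on.get(activity, [])
    return durations[activity] + max((earliest_finish(p) for p in prereqs), default=0)

# The largest earliest-finish time is driven by the critical path and
# gives the earliest possible go-live date.
print("Earliest go-live: week", max(earliest_finish(a) for a in durations))
```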

Team definition and activity ownership: While the activities themselves are either the primary responsibility of the bank or of the vendor (or third-party service providers), it is also quite useful to ensure each activity is assigned to a specific owner, who tends to be a part of the core team.

Identifying the key members of the team and clearly assigning responsibilities is integral to this exercise. More importantly, it is also critical that all key resources to be mobilized from the supplier’s end are identified, with a clear plan for onboarding them onto the programme.

Communication plan: Any programme plan or charter would be incomplete without an explicit definition of the modus operandi for communication, both internal and external. Internal communication covers the periodic status update to all stakeholders, including the steering committee, and the format of reporting.

External communication covers the key milestones and schedules at which customer and regulatory communications are to be made. The plan, once finalized, would need to be agreed upon and signed off by all stakeholders.

There have been quite a few instances where more than one version of the plan is being followed, and that can be quite ominous – having a single view of the programme, with everyone aligned to it, is very important.

CUSTOMISATION: TAMING THE ANIMAL
Even before the right core banking solution supplier is identified, banks would typically have gone through a long process of determining the key gaps in the system that need to be customized in order for the solution to meet the bank's requirements. One of the first activities of the implementation programme is for the bank team to review the product in detail and revisit those gaps, so that they are fully understood by the supplier's development team and rightly reflected in the functional and system requirement specification documents.

It is important to have this validated and signed off, as it thereafter becomes the key reference document for the product enhancement team, who typically sit offshore where the customizations are carried out. Other than the interfaces required for the core banking platform to co-exist with other surviving applications, customizations generally constitute both product-level changes and requirements prevalent in the region where the bank operates. Additionally, there would also be 'bank-specific' customizations required to align with the way in which the bank operates. Now, this is where the trouble starts.

As long as the changes required in the system are critical – either from a regulatory or a regional practice standpoint – they are quite inevitable and need to be accommodated. However, the floodgates tend to open when the bank team goes a little overboard and looks to tweak the system to make it do what the bank has 'traditionally' been practicing, from a process perspective. This can sometimes get as bizarre as buying an Airbus A380 and making it run on the road! This needs to be contained and nipped in the bud. Remember, the fewer the customizations, the higher the chances of a successful implementation and a smooth sail thereafter.

A simple rule of thumb that helps determine if customization is required would be to use this three-point checklist:

  • Is the absence of this customization likely to violate regulatory compliance norms?
  • Would there be a serious deviation from the local/regional practices without this customization?
  • Is there a very high financial or customer service impact, should this customization not be done?

If the answer to one or more of the above questions is yes, then you should allow the customization to happen. Where the answer is no across all three, it is an obvious case for dropping. From my own experience, at least half of the initially identified customization items do not have an affirmative answer to any of the above!
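
Because this triage gets applied to hundreds of candidate items, it helps to make the checklist mechanical. A minimal sketch, assuming each request is logged with three yes/no flags (the field names here are illustrative, not from any specific tool):

```python
from dataclasses import dataclass

@dataclass
class CustomizationRequest:
    """One candidate customization, scored against the three-point checklist."""
    name: str
    breaks_regulatory_compliance: bool  # would its absence violate regulatory norms?
    deviates_from_local_practice: bool  # serious deviation from local/regional practice?
    high_financial_or_cx_impact: bool   # very high financial or customer-service impact?

def should_customize(req: CustomizationRequest) -> bool:
    # Allow the customization if the answer to one or more questions is yes;
    # a 'no' across all three is an obvious case for dropping.
    return (req.breaks_regulatory_compliance
            or req.deviates_from_local_practice
            or req.high_financial_or_cx_impact)

backlog = [
    CustomizationRequest("Central-bank reporting format", True, False, False),
    CustomizationRequest("Replicate legacy approval screen", False, False, False),
]
print("Customize:", [r.name for r in backlog if should_customize(r)])
print("Drop:", [r.name for r in backlog if not should_customize(r)])
```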

DATA MIGRATION: LOCK, STOCK AND BARREL
Data migration is the single longest track: it starts right at the beginning of the programme and continues until the last migration run, also called the final cutover. The whole idea is to ensure that all the data required to run the new core banking system gets migrated from the existing sources, including the current core banking platform and the aligned ancillary systems. Akin to moving house, the process of migrating data has some striking similarities:

The migration plan is drawn up bearing in mind the 'end-state' requirements. The key here is to ensure the new platform has what it needs; the objective is not to migrate everything present in the old system. Just as we align furniture to a new home's layout, the data that is migrated will need to be enriched or enhanced to ensure requirements are met. There are multiple approaches, from default fill-in values to an end-to-end enrichment programme, depending on the complexity of the data and the time available.

A series of ‘mock’ migrations are to be conducted to fine-tune the migration logic, validate the migration code and finally, clock the migration time.

It’s important that the final cutover is executed in the span of 24 to 36 hours, and the repeated mock migrations help sharpen the axe for the final cut.

What to watch out for is the readiness and accuracy of the approach adopted by the team assigned to validate the data migration. Remember, this is the activity that ensures all the details of the customer, including his/her account balance, and the financials of the bank are migrated from one system to another.
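
One simple discipline that strengthens this validation after every mock run is a reconciliation of record counts and control totals between source and target. A minimal sketch, assuming both systems can produce account-level CSV extracts (the file and column names are invented for illustration):

```python
import csv
from decimal import Decimal

def load_balances(path: str) -> dict:
    """Read an account-level extract into {account_id: balance}."""
    with open(path, newline="") as f:
        return {row["account_id"]: Decimal(row["balance"])
                for row in csv.DictReader(f)}

def reconcile(source_file: str, target_file: str) -> None:
    src, tgt = load_balances(source_file), load_balances(target_file)
    missing = src.keys() - tgt.keys()      # accounts dropped during migration
    extra = tgt.keys() - src.keys()        # accounts that appeared unexpectedly
    mismatched = [a for a in src.keys() & tgt.keys() if src[a] != tgt[a]]
    print(f"Counts: source={len(src)}, target={len(tgt)}")
    print(f"Control totals: source={sum(src.values())}, target={sum(tgt.values())}")
    print(f"Missing={len(missing)}, extra={len(extra)}, balance mismatches={len(mismatched)}")

reconcile("legacy_accounts.csv", "new_cbs_accounts.csv")
```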

PARAMETERISATION: THE FINE PRINT
It is always amazing to note that no matter how many implementations are done with the same product, and no matter how similar two banks are – in terms of geography, size, operating model, regulations etc. – there are always significant variances in the subtle nuances of what products and services they offer, what procedures are adopted, and how the financials are recorded and reported.

Well-established global core banking solutions address these differences by way of a very important facet of the implementation, called parameterization. Right from defining the key variables that build the attributes of a product, to customer segments and the aligned fee structures, to the accounting treatment to be followed and the resultant financial reporting, all of the bank's fine print gets captured and defined in the system through this phase.
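
To make the idea tangible, parameterization essentially expresses a product as data rather than code. A hypothetical savings-product definition (the field names and values below are invented, not any vendor's actual parameter set):

```python
# Hypothetical parameter record for one deposit product; in a real CBS these
# values are maintained through parameter screens, not in source code.
savings_product = {
    "product_code": "SAV-STD-01",
    "customer_segments": ["retail", "premium"],
    "interest": {"rate_pct": 3.25, "accrual_basis": "ACT/365", "capitalisation": "quarterly"},
    "fees": {"monthly_maintenance": 2.00, "below_min_balance": 5.00},
    "limits": {"min_balance": 500.00, "daily_withdrawal": 10_000.00},
    "accounting": {"interest_expense_gl": "50110", "fee_income_gl": "40220"},
}

def monthly_interest(balance: float, product: dict) -> float:
    """Illustrative accrual: simple monthly interest from the parameterized rate."""
    return round(balance * product["interest"]["rate_pct"] / 100 / 12, 2)

print(monthly_interest(10_000.00, savings_product))  # 27.08
```

Launching a new variant then becomes a matter of cloning and adjusting the record, which is precisely why the bank's core team must be fluent in what each parameter does.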

Ensuring that a core team of people from the bank becomes fully familiar with the parameters and the application is key to ensuring the needs are logically articulated and captured in these parameters. Some forward-looking banks also leverage this opportunity to develop system-aligned process flow documents (PFDs) that map the key processes of the bank, reflecting the steps to be executed both within the system and outside it. The PFD also helps to align the various roles of members within the bank, and serves as a quick reference document both for testing the solution flow and for training the end-users.

TESTING: THE DEFINING MOMENT
If one lists all the core banking programmes that have failed in the history of technology platform transformations, most of them are likely to have been called off during the testing phase. The 'testing phase', as it is rightly called, is where the bank gets to validate whether the software is ready to be rolled out, and there are three important sub-questions that need confirmation here:

  • Is the product doing all that it is supposed to do – for which the bank had invested in it?
  • Have all the customizations that the bank asked for, been delivered and are the parameterized values working well?
  • Has the data used for testing been shown to have migrated accurately?

The experience of the User Acceptance Test (UAT) is always the lead indicator of what end users are likely to experience once the product is rolled out, and therefore it is important to ensure this is managed well. Large core banking implementations would typically have at least four to five rounds of UAT before there is a general consensus for the product to be rolled out.

Additionally, a series of System Integration Tests (SIT) are conducted prior to the UAT, to validate the technical aspects of the system and the interfaces that have been built. There are also specialized third-party testing vendors whose services are leveraged for executing this activity, and practices around agile testing methods are being adopted as well. The key is to start this activity quite early in the game.

In addition to the UAT, there are two other popular tests that banks are looking to conduct, and quite rightly so. The first is performance testing, wherein the speed and performance of the system are validated on the specific hardware, to ensure the response time and end-user experience (and, in the case of channel transactions, customer experience) are assured. The second is penetration testing, which validates that there are no soft spots for external access into the system. This is all the more important where the system is exposed to the internet.

TRAINING: UNLEARNING AND LEARNING
No matter how good the system is, and no matter how well the product is customized, parameterized and tested, if training on the product is incomplete or insufficient, then there is every likelihood that the product gets 'disowned' by the users. The risk of this is quite large where users are accustomed to an old platform and the merits and benefits of the new platform are not sufficiently explained and appreciated. The 'unlearning' of the old ways of doing things is equally important, if not more so, than the learning of the new platform.

One of the most common errors many banks make is to limit this to technology or system training in the classroom. This false comfort results in serious challenges after the system goes live, as users find it difficult to apply the knowledge in the real world.

The training should not only impart knowledge of the new screens and the processes around them, but also ensure users have actually 'played with the new system' in their own environment (and not just in the classroom or through CBT). This is also addressed by way of 'business simulations', where all users across the bank simulate the life of a normal working day, posting transactions just the way they would after go-live, so that the efficacy of the process and the accuracy of the system and its reporting are validated.

GOING LIVE AND ROLL-OUT: RUBBER HITS THE ROAD
When the big moment does arrive – and before you realize it, it will – there comes a stage where the marathon becomes a little tiring, and you want to get the system rolled out as soon as you can. There will always be two schools of thought: one believes we need 'to take the plunge', while the other says exactly the opposite – 'it's too deep to leap now'.

Readiness should therefore not be a function of what people think or feel, but of a structured framework that measures things holistically. Cedar's RAPID framework is extensively used to determine if the bank is indeed ready to launch the new platform, and helps to measure across five parameters – Resources, Application, Processes, Infrastructure and Data.

Needless to say, the above is not meant to be exhaustive, but rather a definitive and vital checklist for the launch of the core banking platform. Independent of whether it is a 'big-bang' go-live or a phased roll-out, unless you have a green light across all of the above parameters, it would be a little premature to announce the arrival of the newborn.
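
The framework itself gives the five dimension names; one hypothetical way to operationalize it as a go/no-go gate, where every dimension must independently be green, might look like the sketch below (the criteria and statuses are illustrative, not Cedar's actual scoring model):

```python
# Illustrative go/no-go gate over the five RAPID dimensions.
# The dimension names come from the framework; the statuses are made up.
readiness = {
    "Resources":      True,   # trained staff mobilized, key roles covered
    "Application":    True,   # UAT signed off, open defects within tolerance
    "Processes":      False,  # PFDs for two branch processes still unsigned
    "Infrastructure": True,   # production and DR environments passed checks
    "Data":           True,   # final mock migration reconciled cleanly
}

blockers = [dim for dim, green in readiness.items() if not green]
if blockers:
    print("No-go: amber/red on", ", ".join(blockers))
else:
    print("Green across all five parameters: proceed to cutover")
```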

That being said, the success and efficacy of the new system does not get measured by how smooth the cutover was but, almost inevitably, by how good the experience is after the system goes live. And that is a function, as we discussed before, of how well we help users learn the new platform and unlearn the old ways of doing things. After all, the only constant, as they say, is change!

Four key success factors to designing a digital-ready bank

Every digital transformation is looking to address a fundamental shift, both from the customer’s perspective and also with the internal processes and organisational models. What are the elements that differentiate a robust design of a digital-ready bank against others?

With so much being said, written and debated about digital journeys and transformations and the benefits they drive, the question that still deserves an honest answer is this: do all digital transformation programmes necessarily result in achieving their intended goals?

There are several global banks that have suffered from, or are being challenged by, the growing expectations on digital performance while simultaneously being constrained by ground realities. When digital banking does not necessarily drive customer satisfaction, more often than not there are a few obvious – but addressable – reasons to look out for.

Global spend on digital transformation is estimated to grow to $2 trillion in the next three years, even if we reckon that only 20% of that is in the financial services space. In all fairness, the digital transformation journey cannot be principally different from any other transformation if one looks at it through the lens of its ultimate objective of driving change, and therefore the success factors embedded therein can be quite similar too.

At the same time, there are dimensions that can be quite different from other transformations that need to be borne in mind – an example of this being NAB’s decision to reduce 6000 roles (source: IBS Intelligence, November 2017) as they automate and simplify the business model, even while 2000 new jobs are being created to enable the workforce to deliver the 2020 plans.

Now, those kinds of dual hire-and-fire approaches are somewhat unprecedented, and that is where digital transformations need to be dealt with differently.

So what are those key questions, whose definitive answers can stand the test of a digital banking transformation? Here are four key success factors that are aligned with the Balanced Scorecard framework that we at Cedar-IBS believe are critical for any CEO if the bank is on a digital transformation journey.

Are customers getting a new, fresh and differentiated experience?
If more than 80% of mobile penetration is likely to be smartphones in the next three years, and customers are looking to get all their banking at their fingertips (pun intended), then it is not just offering that service that makes you the bank of choice, but being the most interesting, differentiated and intuitive service provider that gets the customer's attention.

Getting the right “Design Thinking”, with simplicity at the core of the services model, is the key success factor here. For instance, when an app converts your phone into a bar-code reader that automatically charges items to a credit card without having to pay at the retail checkout, we are not just talking about convenience, but an altogether new customer experience that the bank (Barclays, in this case) is looking to offer.

It’s not just about digitizing the customer touchpoints and driving coherence with omnichannel banking, but about everything to do with banking. It is about getting all channels, data, technology, and operations to converge on driving a better customer experience (CX).

Industry estimates peg a 2-3% growth in revenues for every increase in the customer satisfaction decile. CX is a function of ease, speed, transparency and, above all, the 'wow'. And that is where the digital bank can score, as evidenced by the higher NPS of digital-first banks. Value-perception-driven customer experience is also a key factor here, and it pays to invest in building an image, although it is critical that the delivery framework lives up to the claim.

Is the bank’s operating model redesigned to suit the digital era?

From being product-centric to process and customer-centric, there have been multiple theories that define the central theme of every operating model. However, what would be critical for thriving in a digital era is weaving technology and a digital thought process around everything.

More and more banks adapting their process frameworks in accordance with “digital customer journey maps” is evidence of this. Product design and development, driving new-age innovation in services, mainstream customer engagement and everything to do with the back-end process will be re-oriented to fit the digital agenda. It is no longer about anywhere, anytime, but also about any device, and therein lies the real essence of the transformation.

If Social, Mobile, Analytics and Cloud (SMAC) does not constitute the primary vocabulary of a bank's strategic roadmap, chances are that digital is not at its center of gravity, and the bank sees this as just another initiative. And if digital is just another initiative in the long list of projects run by the bank, or just another business line, then chances are that the operating model it is currently delivered in is unlikely to last very long. If 80% of urban millennials already believe digital banking is the primary way to bank, then the default operating model – the design for the end-game – will need to be digital in catering to that audience too.

So what happens to the branches?

Well, let's just say they will exist to serve a purpose in the new scheme of things. As the customer profile transcends generations over the next 10 years, as smartphones grow into a one-stop-do-all device, and as branches become more like flagships with digital experiences embedded in them, a new paradigm of a digital end-state is likely to emerge in the next few years.

JP Morgan's focus on building self-service kiosks and card-issuing machines in branches, and Wells Fargo's video-call technology for customers to speak with its personal bankers, are examples of what branch banking will become in the days to come.

“A new paradigm of a digital end-state is likely to emerge in the next few years.”

Have we got a future-ready digital organisation framework?

It may not be enough to just pay lip service to having an agile organization. A truly future-ready digital organization framework would be about driving that spirit both in thinking and doing. For example, the roles and responsibilities of a manager can be different between an agile and a waterfall development model, and attempting to have both co-exist may paralyze the functioning of the team.

Driving innovation inclusivity within the team, and having an agile framework that can plug and play with multiple outsourced entities, creates a dynamic ecosystem that allows for digital innovation, promoting both collaboration and co-creation.

An interesting example is Capital One's initiative to institutionalize design thinking and lean startup learning across the organization, and accelerate the enterprise-wide digital agenda with a stimulating environment for ideation and customer experience enhancement. Banca Intesa, an Italian bank based in Turin, has invested in a digital learning process based on a Netflix-style app, driving active engagement by 100,000 employees. The bank won the Workforce Empowerment and Behaviour award for its digital learning portal.

Time-to-market is key, and any framework that dilutes this proposition is unlikely to sustain. Long cycles of planning, development, testing and roll-out are passé. The new-age thinking is about prototype-driven minimum viable products (MVPs) and scaling those that pass the smell test.

To get this going, we need banking business experts, UX designers, IT development teams and quality assurance professionals working in tandem: a cross-functional innovation team of a different order from what we had 10 years ago.

The spirit of agile models is in making things faster, more dynamic and more effective. This also means easier adaptability, real-time interfaces and the promotion of online virtual communities that are complementary and yet not bound by geographic restrictions. More importantly, both enterprise and individual performance measurement frameworks would need to be realigned with changing priorities as well.

Do we know if we are truly generating real value?

A boardroom conversation at a bank is incomplete if it has not expressed concern about growing micro-loan fintech players or peer-to-peer crowdfunding models, and the advent of robotics and artificial intelligence defining new ways of financial advice. Yet it is also true that not every board member relates to the value in store from a next-generation digital world. The typical passive approach to board approvals on digital transformation is twofold: a) Being relevant: if you're not on the digital map, you don't exist; and b) Staying ahead: not being the first to offer is as good as no offer.

Defining a value proposition is a function of distinguishing between customer segments that are primary today as against the segment that would be centre-stage tomorrow. The hallmark of a digital model is also about differentiating the value drivers for each customer segment based on what is critical for each of them.

This also has another connotation: being digital also comes with the responsibility of safeguarding against new-age risk factors – cyber security in particular, which can create havoc if not pre-empted. Customer data, drivers of relationships and information assets are sources of value, and losing them to intruders can bring any bank to its knees if they are not protected vigorously. With regulatory norms also increasing by the day, the cost of compliance can rise steeply in the digital era.

However, the real case for a digital transformation can be, and needs to be, much more than this. It is essentially driven by where we visualise the organisation to be over the next three to five years, what that would mean to shareholders from a value perspective, and what it takes to get there, including the key changes that need to be driven.

Fortunately, most of the changes are driven by what is adaptable from an immediate standpoint. The digital paradigm, as they say, is all about driving long-term vision but with short-term execution. Launching digitally innovative products, driving digital adoption by the customer, improving digital process framework and building a digital-ready organisation are all means to a larger end: is there value created – either with increased business or with reduced costs? Ultimately, the proof of the pudding is always in its eating – and for once, this had better be real and not virtual!

“Stay ahead: not being the first to offer is as good as no offer”

Getting your machine learning right

As we enter the age of Siri and Alexa, the emergence of machine learning has a new implication for the financial services industry. So how do we get it right?

Machine learning (ML), in its most simplistic form, is when a computer automatically learns and uses that learning to predict a future course of action. The learning could draw on data in all its forms across the universe of Big Data, deducing further information, relating it to insights, and driving the course of action on that basis with predictions and decisions.

While ML and artificial intelligence (AI) find applications across a variety of sectors ranging from healthcare to hospitality, the impact they have already made in the financial services sector relates particularly to a few specific applications, including fraud detection, cyber security threat reduction, automated smart loan origination, pre-emptive collections, virtual assistance and algorithmic portfolio management. In addition to reducing operational costs, ML environments allow for improved accuracy and a better customer experience.

The estimated financial benefits in each use case have generally been positive. For instance, ML is estimated to drive down bad debt provision by more than 35%. Similar estimates also exist for each of the above applications.

While there are multiple use cases that one gets to see every day, from chatbots to optical character recognition (OCR) and sentiment analysis, we will explore the effectiveness of ML in the context of fraud detection here. This is about building correlations within a seemingly unrelated set of data to help determine potentially fraudulent activity.

ML allows for identifying and preventing frauds which may otherwise not have been detected as regular fraudulent activity. For example, a self-adaptive ML program would be able to connect the dots that link geolocation data with past transaction behaviour data to alert on potentially fraudulent card transactions.

A natural corollary to this is where geolocation details are used to increase customer intimacy and wallet share – an obvious example being an instant discount offer to customers at outlets in the neighbourhood where they are located at that point in time.

The successful deployment of ML is a function of effective training of the system and accurate scoring – understanding the data, creating models, generating insights and validating the output. This validation, confirming that there are no 'false positives' and training the system to detect such occurrences the next time, is an iterative process. This iteration, also called the Data Sciences Loop, is key to getting the ML right.

A case in point is where financial transactions are scanned for suspicious behaviour. While the effectiveness of the ML tool deployed is about predicting fraudulent transactions accurately, it is equally critical to minimize the wrong flagging of legitimate transactions – which would result in excessive interference and customer dissatisfaction.
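
As an illustration of this training-and-validation loop, the sketch below fits a simple classifier on transactions described by two features of the kind discussed above (distance from the customer's usual geolocation, deviation from typical spend) and then measures the false-positive rate on held-out data. It is a toy with synthetic data, not a production fraud engine:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)

# Synthetic features: [km from usual geolocation, deviation from typical spend]
n = 5000
X = rng.normal(size=(n, 2)) * [50.0, 1.0]
# Synthetic label: fraud is more likely far from home AND at unusual spend levels
y = ((X[:, 0] > 50) & (X[:, 1] > 0.5)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Validate: false positives are legitimate transactions wrongly flagged.
tn, fp, fn, tp = confusion_matrix(y_te, model.predict(X_te)).ravel()
print(f"frauds caught: {tp}, frauds missed: {fn}")
print(f"false-positive rate: {fp / (fp + tn):.4f}")  # feeds the next iteration
```

Each pass through this loop retrains on newly labelled outcomes, which is the essence of the Data Sciences Loop described above.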

So how does one get the ML right, and what are the right components of an effective ML platform? We explore the three steps that are critical to getting ML to be truly effective.

Managing Big Data: getting the data model right

As ML platforms tend to be agnostic to data schemas, and data sources increase over time, the ability to absorb new feeds of data becomes a key factor. Enterprise-level model development is taking centre stage, and model outputs are now part of regulatory and business intelligence processes. A key challenge is where the data required by models sits outside the IT governance framework.

Big Data management systems and data reservoirs have emerged as the name of the game. Given that the data now available spans both human- and machine-generated sources in structured and unstructured forms – including social media, biometric, legacy and transactional data – having the right framework to clean, transform, store and enrich different types of data with a high-volume distributed file system is the first step for effective ML. This then allows for a granular perspective of micro-segments, which drives predictability. The key success factor here is the ability to drive automated ingestion and accurate mapping of data from source systems into a central repository.
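
A minimal sketch of what 'automated ingestion and accurate mapping' can mean in practice: each source feed declares how its fields map onto a canonical schema, so absorbing a new feed means adding a mapping rather than writing new code. The feed and field names below are invented for illustration:

```python
# Canonical transaction schema expected by the central repository.
CANONICAL_FIELDS = ("customer_id", "timestamp", "amount", "channel")

# Per-source field mappings; onboarding a new feed means adding an entry here.
SOURCE_MAPPINGS = {
    "core_banking": {"cust_no": "customer_id", "txn_ts": "timestamp",
                     "txn_amt": "amount", "chan": "channel"},
    "mobile_app":   {"userId": "customer_id", "eventTime": "timestamp",
                     "value": "amount", "channelName": "channel"},
}

def ingest(source: str, record: dict) -> dict:
    """Map one raw record from a named source into the canonical schema."""
    mapping = SOURCE_MAPPINGS[source]
    row = {canonical: record[raw] for raw, canonical in mapping.items()}
    missing = [f for f in CANONICAL_FIELDS if f not in row]
    if missing:
        raise ValueError(f"{source} record missing {missing}")
    return row

print(ingest("mobile_app", {"userId": "C42", "eventTime": "2018-06-01T09:30:00",
                            "value": 125.0, "channelName": "MOBILE"}))
```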

“Machine learning also comes with the responsibility of more vigilant data security and governance”

Continuous discovery: building effective ML models

As ML platforms tend to become more sophisticated, model efficacy becomes a function of continuous improvement to the rules engine and the ML models. Determining misconduct in a trading activity, for example, is a continuous process of correlating discrete Natural Language Processing (NLP)-enabled data across telephone calls and emails with the underlying trade. NLP allows machines to decode human language, both written and spoken.

Agile approaches allow for multiple rounds of testing to fine-tune results and improve the workflow, in what one would call a continuous discovery process, and this forms the bedrock of any effective ML environment. The key here is to minimize external intervention, with a self-learning approach, using what is known as the 'Model Sandbox'. New ML models tend to offer a deeper and more insightful view of the data than was previously possible.
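
To make the correlation idea concrete, here is a deliberately simplified sketch: communications are scanned for risky phrases and matched to trades by trader and time window. A real surveillance system would use trained NLP models rather than a keyword list, and everything below (names, phrases, data) is invented:

```python
from datetime import datetime, timedelta

RISKY_PHRASES = ("keep this between us", "before the announcement", "off the record")

trades = [
    {"trader": "T1", "ts": datetime(2018, 3, 5, 14, 0), "instrument": "XYZ"},
]
emails = [
    {"trader": "T1", "ts": datetime(2018, 3, 5, 13, 10),
     "text": "Let's move on XYZ before the announcement."},
]

def flag_suspicious(trades, comms, window=timedelta(hours=24)):
    """Pair each trade with nearby communications containing risky language."""
    alerts = []
    for t in trades:
        for c in comms:
            same_trader = t["trader"] == c["trader"]
            close_in_time = abs(t["ts"] - c["ts"]) <= window
            risky = any(p in c["text"].lower() for p in RISKY_PHRASES)
            if same_trader and close_in_time and risky:
                alerts.append((t["instrument"], t["trader"], c["text"]))
    return alerts

for alert in flag_suspicious(trades, emails):
    print("Review:", alert)
```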

Real-time execution: integrated architecture

The core value proposition of any ML tool is in the real-time processing and execution of the model. A case in point is when millions of cross-border transactions are processed for fraud detection, reducing manual intervention in anti-money laundering validations.

Determining that there is a potential fraud would be of no value, should the corrective action not be executed in real-time. And this involves having an application architecture that is tightly integrated with the ML tools.

An extended example is the use of ML in cyber security, executing all three of the above steps: sifting reams of log scans (data model), determining hostile threats that require early warning (ML model), and executing an immediate act to self-protect the system from data loss (real-time execution) are all carried out in real time for true impact. The concept of machine-to-machine exchange of information, driven by the Internet of Things (IoT), can also deliver its true potential only on an architecture that enables real-time execution.
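
The integration point can be sketched as a scoring hook sitting inline in the transaction path, so the corrective action (holding the transaction) happens in the same call as the prediction rather than in an after-the-fact investigation. The queue, model stub and threshold below are stand-ins, not a reference to any particular product:

```python
import queue

txn_stream: "queue.Queue[dict]" = queue.Queue()

def risk_score(txn: dict) -> float:
    """Stand-in for a deployed ML model's scoring call."""
    return 0.9 if txn["amount"] > 10_000 and txn["country"] != txn["home"] else 0.1

def process(txn: dict, block_threshold: float = 0.8) -> str:
    # Scoring and the corrective action share one synchronous path:
    # a risky transaction is held *before* settlement, not flagged afterwards.
    return "HELD for review" if risk_score(txn) >= block_threshold else "SETTLED"

txn_stream.put({"id": 1, "amount": 25_000, "country": "BR", "home": "GB"})
txn_stream.put({"id": 2, "amount": 40, "country": "GB", "home": "GB"})
while not txn_stream.empty():
    txn = txn_stream.get()
    print(txn["id"], process(txn))
```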

While the examples discussed above are mostly built around financial fraud detection and innovation in regulatory solutions, the concept extends well across other domains too. For instance, a well-developed ML platform could effectively conduct screening tests and background checks to validate whether the applicant in an HR interview is indeed the person he/she claims to be.

Legal firms are looking to use ML to read and review contracts to identify risks embedded in them. With the emergence of machine-readable regulations, we can expect a much lower rate of interpretation errors than presently results from ambiguity in understanding.

Are there areas to be extra sensitive here? Obviously, there is always another side to the coin. The sophistication of data analytics and ML also comes with the responsibility of more vigilant data security and governance.

Investing in the right information architecture and improvising on it also becomes pertinent as sources of data evolve, and more importantly when machines have begun to learn how to get them effectively applied.

At the end of the day, the question still remains: can machines learn better than humans? The jury is still out, but it would pay to watch some emerging examples – behavioural biometrics is one of them.

Machines can now go well beyond signature verification, into recognizing voice patterns, accents and even the distance to the microphone based on how the customer holds the phone. For sure, the days to come are likely to be more interesting than before.

The dawn of the AI era in lending

With the global AI market estimated at $5 billion by 2020, unsecured lending is expected to grow by more than 960% in the next four years. Welcome to the new AI era in lending.

The days when it took customers several weeks to apply for a loan, only to run the risk of eventually being turned down by the bank, are long gone. The dawn of the artificial intelligence era has enabled banks to develop quick risk assessment models and instant credit scoring that fast-track the entire process and create a real value differentiator in the marketplace for early adopters.

When access to funding was reported to be reduced from weeks to hours with automated lending driven by Santander, the underlying engine was primarily the AI-based quick risk scoring that enabled immediate risk assessment, speeding up the underwriting process and providing working capital within hours.

Similar use cases abound elsewhere – a case in point being the initiative by Scotiabank that reportedly enables lending to customers who are new to the bank: the borrower's creditworthiness is assessed using AI, enabling the decision to lend to be made the same day, without the customer even having to walk into a branch. Kabbage was the engine in both the above examples.
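
At its simplest, 'quick risk scoring' of this kind amounts to a pre-trained model scoring an application the moment its data arrives, so a decision can be returned in the same session. A toy sketch with made-up features, labels and thresholds (no relation to any bank's or Kabbage's actual models):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: [monthly revenue (k), years trading, % late payments] -> repaid?
rng = np.random.default_rng(1)
X = rng.uniform([5, 0, 0], [500, 20, 30], size=(1000, 3))
y = ((X[:, 0] > 50) & (X[:, 2] < 10)).astype(int)  # synthetic repayment label

model = LogisticRegression(max_iter=1000).fit(X, y)

def instant_decision(application: list, approve_above: float = 0.7) -> str:
    """Score one application and return a same-session decision."""
    p_repay = model.predict_proba([application])[0, 1]
    verdict = "APPROVED" if p_repay >= approve_above else "REFERRED"
    return f"{verdict} (p_repay={p_repay:.2f})"

print(instant_decision([120.0, 4.0, 2.0]))   # healthy small business
print(instant_decision([12.0, 1.0, 25.0]))   # thin, risky profile
```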

The application of AI is not just about quick risk assessment and same-day credits. It has also been a core value proposition in reducing the error rates in document processing and eliminating human error. JP Morgan, for instance, has reportedly adopted AI through its COIN program, which quickens document review and reduces mistakes in loan servicing.

An estimated 12,000 contracts are reviewed, and almost 360,000 hours of workload are reduced to a few seconds. Franklin American Mortgage has reportedly achieved more than 80% document recognition for its mortgage portfolio, with minimal error rates, through a customized solution it has developed.

An even more interesting application of AI in the world of credit is in the personalization of customer experience. RBC, for example, provides its customers with a recommendation on their best repayment strategy, based on analysis of their financial habits, and also suggests how much they should target to be their monthly contribution. In addition to reducing the loan liability and payback period of the customer, this approach also generates a whole range of new data points that further enable having a targeted customer service offering.

The advent of AI in lending has its share of challenges, too. Essentially, there are three types of issues that are likely to be pertinent:

1. Privacy issues
With a gamut of sensitive information being processed, customers are under continuous threat of fraud, privacy breaches and data theft. This is considered a core issue to be grappled with, especially after the impact created by the Facebook case, and banks and customers are equally wary and likely to deal with it with more sensitivity in the future.

2. Compatibility
Adopting AI into an existing IT infrastructure requires compatibility, which several banks perceive as a hurdle to integrating it into their existing architecture. While this would be a short-term aberration, one would expect it to be overcome quite soon, considering the magnitude of the benefits that AI can bring from a financial standpoint.

3. Human intervention
With the use of chatbots and virtual assistants, more than 40% of mobile interactions are expected to be with VAs. An estimated 57% of human jobs are likely to become redundant in the next decade, making the new era of jobs so new that they do not exist yet! This would obviously imply the realignment of roles, restructuring of credit organizations and re-engineering of credit processes. Obviously, easier said than done.

Crystal gazing: a peek into the future

To debate whether AI is good or bad for the lending industry is quite a moot point. It is the reality, and embracing it fully is a prerequisite for anyone to survive in the new world. And early birds do get the benefits too: be it the identification of new revenue streams from non-traditional borrowers that conventional methods would have rejected, or the elimination of human errors while improving the speed of the loan process, banks stand to gain both on the top line from new customer segments and on the bottom line from cost efficiencies.

Needless to say, the customer of course gains the most in this, as it means not only faster loans, with less paperwork and an error-free environment, but also a personalized experience and potentially significant savings through smarter, thriftier repayments.

So, what does the future have in store if things were to go in the direction they are already speeding in? Here’s a bit of an indulgence and some crystal-ball gazing, with a particular focus on what could be in store for banks, customers, suppliers and industry players in the lending space from the world of AI.

1. Customer experience makes all the difference
The customer is always king, and will continue to be. Chatbots, virtual assistants and robo-advisors will evolve to react with emotional intelligence and insight, and to customize for language sensitivities. Expect a revolution in customer experience; this could be the new world order that differentiates winners from the rest.

2. Automation – the new paradigm
Business processes will now evolve to a different paradigm. AI, machine learning and natural language processing will mean new organization structures, new roles and new processes in the world of credit and lending. A paradigm shift, in the real sense.

3. Emergence of data and algorithm economy
Every piece of data – be it numbers, words, action or inaction – counts as part of the new world of the algorithm economy. Beyond the manual management of processes and segregation of data, everything at scale is expected to be managed by the algorithm economy. The efficacy of these algorithms would, at the end of the day, be the secret sauce of the winners of this race.