One of the most daunting challenges facing the banking industry is the incorporation of artificial intelligence (AI) amid ever-changing regulations and decades upon decades of "unclean" data. If you've spent any time in finance, chances are you've attended seminars on the importance of AI and the advanced analytics it enables, analytics that take actuarial science to the next level by predicting customer behavior in ways conventional algorithms never could. In investment and banking, adopting AI also forces a reckoning with operational risk management: how well we gauge which risks are greater or lesser often determines how successful one becomes in this roller coaster of an industry.
Now, let's talk about the biggest obstacle to a company fully rolling AI solutions into its daily workflow for assessing and mitigating risk: data.
The backbone of all modern machine learning, predictive analytics, and artificial intelligence is data. Without data, no matter how advanced and complex AI solutions become, they will never have the information they need to make accurate predictions. Think back to your days in high school. How did you learn mathematics? Chances are it was from two sources: your teacher and your textbook. For this comparison to make sense, think of the artificial intelligence or machine learning algorithm as the student; the teacher is the implementer of the program, and the textbook is the data. A teacher can only come up with so many examples of their own to serve as data sets. A teacher's true strength lies in guiding the development of knowledge rather than in supplying practice problems. The textbook, conversely, isn't meant to provide direct guidance but rather countless examples of the problems and applications the student will face in the future. Thus, even with the best teacher, a student cannot learn without a strong accompanying textbook. In the same fashion, even the best analyst cannot train a successful predictive AI program without strong accompanying data to train it on.
Herein lies the problem most modern banking and financial companies face: their data is housed in localized legacy systems. In the age of cloud-based computing and storage, we now face a new problem in integrating the legacy systems that house this ever-valuable data with the algorithms that need it for training. As an example, say you're working on churn modeling, which predicts how long a given customer will keep using a product or service before leaving, whether that means moving out of a rented house or replacing a household appliance. Often, legacy data systems are fragmented: customers' personal information may be stored in "Base A" while product usage information sits in "Base B." This means that, before an algorithm can be implemented and process automation performed, this data must be cleaned and combined to create the connections the model needs. This is a simple example with two data systems and minimal work but, sadly, building genuinely predictive AI usually requires merging far more than two data systems for proper implementation and automation.
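To make the two-system example concrete, here is a minimal sketch of what combining "Base A" and "Base B" into a single training table might look like. All of the column names (`customer_id`, `months_active`, `churned`) and the toy records are illustrative assumptions, not a real bank's schema; in practice the extracts would come from the legacy systems themselves.

```python
# Hypothetical sketch: consolidating two fragmented legacy extracts
# ("Base A": customer records, "Base B": product-usage records) into
# one table a churn model could train on. All names are illustrative.
import pandas as pd

base_a = pd.DataFrame({
    "customer_id": [101, 102, 103],
    "age": [34, 52, 45],
    "region": ["north", "south", "west"],
})

base_b = pd.DataFrame({
    "customer_id": [101, 102, 104],
    "months_active": [18, 6, 30],
    "churned": [0, 1, 0],
})

# An inner join keeps only customers present in both systems; anyone
# who appears in one base but not the other is a data-quality gap.
training = base_a.merge(base_b, on="customer_id", how="inner")

# Customers missing from either side need reconciling before training.
missing = sorted(
    set(base_a["customer_id"]).symmetric_difference(base_b["customer_id"])
)
print(len(training), missing)  # → 2 [103, 104]
```

Even in this toy version, two of four customers fall out of the join, which is exactly the kind of silent data loss that fragmented systems produce at scale.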
The message is clear: if implementing AI is on the horizon for your financial services organization, you must get your data in order first.
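"Getting your data in order" can start with something as simple as an automated audit of each extract before any model sees it. The sketch below, with made-up records and check names, shows three checks that catch common legacy-data problems: duplicate keys, missing values, and unparseable dates.

```python
# Hypothetical pre-flight audit: before any model training, scan a raw
# extract for the problems "unclean" legacy data typically carries.
# The records and field names here are made up for illustration.
import pandas as pd

raw = pd.DataFrame({
    "customer_id": [1, 2, 2, 3, 4],
    "balance": [1000.0, None, 250.0, 250.0, -50.0],
    "opened": ["2001-05-01", "2003-07-12", "2003-07-12",
               "bad-date", "2010-01-30"],
})

report = {
    # Duplicate keys: the same customer appearing twice.
    "duplicate_ids": int(raw["customer_id"].duplicated().sum()),
    # Missing values in a field the model would rely on.
    "missing_balance": int(raw["balance"].isna().sum()),
    # Dates that fail to parse (errors="coerce" turns them into NaT).
    "unparseable_dates": int(
        pd.to_datetime(raw["opened"], errors="coerce").isna().sum()
    ),
}
print(report)
```

A report like this, run on every extract, turns "unclean data" from a vague worry into a concrete, trackable backlog.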