The “Big Four” Trust Artificial Intelligence: How a Data Expert Makes AI Tools More Transparent for Major Corporations
While companies remain doubtful about the quality of artificial intelligence implementations, Sree Hari Subhash is delivering the innovation and accountability that AI must provide, improving both the speed and the quality of results
At the 2025 World Economic Forum, experts from various fields discussed arguably one of the biggest issues facing the corporate sector today: trust and transparency in how artificial intelligence is used. With algorithms now determining how financial transactions are processed, how risk is managed, and how business decisions are made, companies are looking for ways to communicate their decision-making as transparently and securely as possible. Whether we apply for a loan, receive medical advice, or interview for a job, algorithms are already making decisions that directly affect everyone involved. That is why the question of trust and transparency in AI is now at the center of global attention. Can we be sure that these systems are fair, reliable, and safe?
Sree Hari Subhash, a senior data processing engineer at KPMG, one of the “Big Four” firms, and a Microsoft-certified artificial intelligence specialist, knows how to build reliable AI tools while avoiding business risk. He is working on a solution that makes AI tools more transparent and accountable: a step toward ensuring that people and companies alike can trust the technology shaping their future.
The way into finance is through experience
Making an AI tool that works safely and earns people’s trust is difficult, especially in a financial institution. In a field as large and consequential as the financial market, where a slight mistake can cost companies and customers millions of dollars, skill and knowledge are essential. Earning trust in machine learning and artificial intelligence takes painstaking work to develop tools that actually work and make sense. Sree Hari Subhash has built a career at the intersection of data engineering and machine learning. While at university, he conducted research on predicting Alzheimer’s disease using machine learning algorithms, work that demanded both accuracy and verifiability. He then carried these paradigms into a corporate setting, including projects for large corporations.
To build confidence in artificial intelligence and deploy it properly at scale, particularly in finance, it is critical that every development phase be well documented and reproducible. The expert began working on his main project long before joining KPMG: he first worked at American Express, one of the largest financial companies, through Tata Consultancy Services, where he built digital solutions for fintech. It was in this role that he realized how much care the development of tools for large companies requires.
“We created APIs and microservices that integrated many corporate services and sped up the handling of large volumes of data. As a result, work is faster and the services are far more economical for users,” Subhash commented.
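The integration pattern described in the quote can be sketched, purely illustratively, as a thin aggregation layer that combines responses from several internal services into a single payload, so a client makes one call instead of many. All service names, fields, and values below are hypothetical, not drawn from any real American Express or TCS system:

```python
import json

def fetch_accounts(customer_id: str) -> list:
    # Stand-in for a call to a hypothetical accounts service.
    return [{"account_id": "A1", "balance": 1200.50}]

def fetch_transactions(customer_id: str) -> list:
    # Stand-in for a call to a hypothetical transactions service.
    return [{"txn_id": "T9", "amount": -45.00}]

def customer_summary(customer_id: str) -> str:
    """Aggregate data from both services into one JSON response body."""
    return json.dumps({
        "customer_id": customer_id,
        "accounts": fetch_accounts(customer_id),
        "transactions": fetch_transactions(customer_id),
    })
```

In a real deployment each `fetch_*` function would be a network call to a separate microservice; the aggregation endpoint is what makes the whole surface cheaper for clients to use.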
Moreover, Subhash gained pragmatic insight into how technology affects millions of users in the financial sector. Without the developers’ work, the service would have remained slow: data would have kept being transmitted sluggishly, which had previously led to system failures and numerous complaints. The market therefore demands more and more such implementations, even though it does not always treat them with confidence.
Data architecture for transparent solutions and the trust of big companies
At KPMG, one of the “Big Four” global audit and consulting firms, Hari Subhash now builds scalable data pipelines and analytics systems for data-driven decision-making. His current development is an architecture for transparent data management in Microsoft Fabric and Databricks, in which every data management practice is documented at each step, from data extraction to the resulting analytics. This helps engineers avoid “black box” situations where algorithms run with no verifiability or visibility, and makes the algorithms auditable.
In AI, the “black box” is a problem of neural networks. In short, a neural network adjusts millions of parameters and forms connections on its own. Even when the system works perfectly, developers often cannot fully explain why a specific decision was made. This lack of transparency makes debugging errors, ensuring fairness, and establishing trust extremely difficult. Transparent data practices, like the ones Subhash is developing, address this challenge directly by letting companies see not only the result of an algorithm, but also the path it took to get there.
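The general idea of recording the path from extraction to analytics can be sketched as a pipeline where every step logs what it did and how many records it touched, so the final result carries an auditable trail. This is a minimal illustration of the pattern, not the actual Microsoft Fabric or Databricks implementation; all names and data are invented:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class TracedPipeline:
    """A toy pipeline that records a lineage entry for every transformation."""
    data: list
    lineage: list = field(default_factory=list)

    def step(self, name: str, fn: Callable[[list], list]) -> "TracedPipeline":
        before = len(self.data)
        self.data = fn(self.data)
        # Each step leaves an audit record: what ran, and row counts in/out.
        self.lineage.append(
            {"step": name, "rows_in": before, "rows_out": len(self.data)}
        )
        return self

# Usage: extract -> filter -> aggregate, with every step logged.
raw = [{"amount": 10}, {"amount": -3}, {"amount": 7}]
p = (
    TracedPipeline(raw)
    .step("filter_negative", lambda rows: [r for r in rows if r["amount"] > 0])
    .step("totals", lambda rows: [{"total": sum(r["amount"] for r in rows)}])
)
# p.data holds the analytics result; p.lineage holds the auditable trail.
```

Production systems implement this idea with far richer metadata (timestamps, source identifiers, schema versions), but the principle is the same: the result is never separated from the record of how it was produced.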
“Conceptually, this system combines Microsoft Fabric (Lakehouse, Data Factory, Synapse Real-Time Analytics) with Databricks and Apache Spark capabilities. In simple terms, what is groundbreaking about this development is that the system does not simply give the company an answer. It simultaneously shows the source of the data, how it was processed, and what actions were taken on it. The system becomes transparent, and trust in it grows,” Hari Subhash explained.
He builds data pipelines so that companies can not only apply their models and get insights back quickly, but can also see where the data came from and how it was processed. This relates directly to a key topic of the 2025 World Economic Forum: explainability and building trust in AI, which must be transparent. As financial-services regulators increase the burden of algorithmic compliance, such developments become an important bridge between innovation and accountability.
Trust can only be gained through recognition
The financial sector is in a period of transition. On one hand, banks and consulting firms continue to actively deploy AI solutions for forecasting, risk assessment, and process efficiency. On the other, a breakdown of trust could cost billions of dollars and erode public trust in technology altogether. When it comes to AI, large companies can either approach it passively or bring in highly qualified specialists.
Sree Hari Subhash’s work relies on embedding the values of transparency and explainability into every data architecture. His time at KPMG and at other large organizations such as American Express shows that these solutions work in practice, enabling organizations to deploy AI without fearing regulators or unexpected disruptions. But an engineering solution alone is not enough to establish trust; independent validation of an expert’s abilities is equally important. Sree Hari Subhash holds a number of prestigious international certifications, including Fabric Analytics Engineer Associate (DP-600) and Azure AI Fundamentals (AI-900), which independently validate his skills in building solutions on the data and AI platforms the world’s biggest companies use today. That is why the largest players in the financial market trust him.
In addition, in 2025 he was invited to join the jury of an award that annually highlights significant achievements in AI: projects, teams, initiatives, and developments. The jury is carefully selected from AI experts with proven experience and professional track records. Evaluating innovative AI solutions gives Subhash a unique lens: he builds such systems himself, but can also critically assess the work of his peers against international best practice.
Deep specialization and systematic work with data make it possible to address one of the most pressing problems of our time: distrust of AI. In a world where artificial intelligence increasingly acts as an arbiter of value, especially in finance and business, work like this sets a reference standard for transparency and accountability. When experts discuss global AI governance, they formulate goals and strategies, but it is the data engineers working inside these systems who turn those ideas into practice, building tools that can be trusted.