What it takes to make AI useful in enterprise systems: A conversation with Akash Jindal

Akash Jindal discusses building practical, trustworthy AI for enterprises, sharing insights on AI adoption, governance, ethics, and real-world impact.
As artificial intelligence moves from experimentation to everyday use in enterprises, the
challenge is no longer about building smarter models, but about making them practical,
trustworthy, and impactful. Akash Jindal has spent years working across this shift, leading
the development of data-driven products in financial services, marketing intelligence, and
customer insights. Operating in environments shaped by regulation, risk, and real-world
decision-making, his work has focused on aligning AI systems with business workflows,
accountability, and measurable outcomes. In a conversation with Hans India, Akash shares how that perspective has shaped his approach to building and delivering AI that organisations can actually use.
Hans India: Akash, you’ve been working at the intersection of AI and product for quite a
while now. What originally shaped the way you approach AI delivery?
Akash:
Honestly, my thinking came from watching a lot of technically impressive work fall flat
because nobody knew what to do with it. I started out in analytics, and it didn’t take long to
notice the gap between “the model is great” and “the model is actually useful.” If someone
can’t apply an insight, or doesn’t trust it, then the technology isn’t solving anything.
As I moved into product roles, especially in financial services, that lesson only grew louder.
These environments are full of pressure points: regulation, risk, and decisions that carry real
consequences. You learn very quickly that clarity, trust, and workflow fit matter just as much
as model performance. Maybe even more.
Hans India: And in terms of problem spaces, where have you focused your energy lately?
Akash:
A lot of my work revolves around customer intelligence, operational efficiency, and risk.
They’re interconnected, and they all matter.
When it comes to customers, banks have so much data but very little clarity. Helping teams
interpret behavior responsibly goes a long way toward better service.
Operationally, many processes are still painfully manual. If AI can take the routine work off
people’s plates, they can spend time on things that actually require judgment.
Risk and compliance, though, that’s the heart of it. Regulations shift constantly, and
organizations need systems that can adapt, surface issues early, and provide explanations
that stand up to scrutiny. Doing that well builds trust.
Hans India: Adoption is a huge challenge in AI. What’s your approach to making sure
people actually use what you build?
Akash:
I start with how people work today, not how we wish they worked. If a tool doesn’t fit into that
rhythm, adoption becomes a battle.
Transparency is huge. People need to see not just what the system recommends, but why.
Especially in finance.
I’m a big believer in small, meaningful wins early on. Ship something narrow but valuable.
Show the impact. Build momentum.
And I always measure real-world outcomes, not just model metrics. Time saved, accuracy improved, risk reduced: those are the numbers that matter.
Training and support play a bigger role than people expect. Adoption isn’t automatic. You
need to help users feel comfortable.
Hans India: How do you balance ethics and delivery speed?
Akash:
People treat ethics like a tax. It’s not. Done properly, it keeps you from digging yourself into
holes later.
We look at fairness, transparency, and compliance from day one. That means having legal
and compliance in the room early, identifying risks upfront, and documenting the model as
we go.
When you build with clear governance, you actually move faster. There’s less uncertainty,
less rework, and fewer surprises. It’s the shortcuts that cost time.
Hans India: Collaboration across disciplines is always hard. What works for you?
Akash:
Shared ownership. The moment only one group feels responsible, things fall apart.
We run cross-functional kickoffs, weekly working sessions, frequent demos, anything that
keeps people aligned and talking.
I also like rotating roles. A data scientist joining a product sprint, or an engineer sitting in a compliance review, builds empathy that you simply can't get from documents.
And clear decision rights help immensely. People need to know who owns what.
Hans India: Finally, what changes do you see coming in the next few years for AI in
financial institutions?
Akash:
Three big ones.
AI will spread across the organization. Not just data teams, but everyone.
We’ll move from systems that suggest actions to systems that can take them, within limits.
That shift introduces a whole new layer of design challenges.
And regulation will get significantly stronger. Institutions that invest in governance now will
have a head start.
The flashy stuff is interesting, but the real advantage will come from having the right foundations: data, governance, skills, and clear alignment on what problems actually matter.