Why Risk Engineering Will Be the Most Important Skill for the Next Generation of Developers

The most dangerous developer in 2030 will not be the one who cannot build with AI. It will be the one who can build with AI but cannot control its failure modes.

In the pre-AI era, software developers were rewarded for shipping features fast. Today, with AI, developers are held even more accountable for what happens when software behaves unexpectedly at scale.

Risk engineering for software is the practice of designing systems that are continuously monitored, prepared for the possibility of failure and equipped with rollback mechanisms. It means preventing bad outputs, building kill switches, validating models before deployment and, most importantly, keeping humans in the loop in case something goes wrong.
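One concrete form a kill switch can take is a simple circuit breaker that stops serving a model's outputs after repeated anomalies. A minimal sketch, assuming consecutive-anomaly counting as the trip condition (class and parameter names are illustrative, not from any particular library):

```python
class KillSwitch:
    """Trips after too many consecutive anomalous outputs (illustrative sketch)."""

    def __init__(self, max_anomalies=3):
        self.max_anomalies = max_anomalies
        self.anomaly_count = 0
        self.tripped = False

    def record(self, output_is_anomalous):
        # Count consecutive anomalies; trip once the threshold is crossed.
        if output_is_anomalous:
            self.anomaly_count += 1
            if self.anomaly_count >= self.max_anomalies:
                self.tripped = True  # downstream code routes to fallback / human review
        else:
            self.anomaly_count = 0  # a healthy output resets the streak

    def allow(self):
        """Return True while the model may keep serving."""
        return not self.tripped
```

In practice the trip condition would be richer (error rates over a window, drift metrics), but the shape is the same: the switch sits between the model and its consumers, and a human decides when to reset it.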

Now, this is where AI changes the game. Traditional software was largely deterministic and specification-bound, whereas AI-based software is probabilistic and can be open-ended, so we need to account for some level of ambiguity and the risk that comes with it.

In the course of our work on automated trading flows at MarketAxess, a leading institutional trading platform in the fixed-income space, we come across such problems every day. Put simply: we receive an order to buy a particular bond, and there are multiple sell orders on the opposite side. The order needs to be split in a way that gives the best outcome to the client. That splitting relies on quant and AI models whose behaviour can vary on every occasion based on numerous market conditions. However, when we test a flow, not every possible scenario can be accounted for. This is where the principles of risk engineering come in, asking us to prepare for every form of failure and for the rollback mechanisms that may or may not be needed.
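Stripped of the models and the market microstructure, the splitting problem above can be sketched as a best-price-first allocation. This is a deliberately simplified illustration, not the production logic; the explicit failure on unfilled quantity shows where a risk-engineered flow would hand off to a fallback path:

```python
def split_order(quantity, sell_orders):
    """Allocate a buy order across sell orders, cheapest first (illustrative sketch).

    sell_orders: list of (price, available_quantity) tuples.
    Returns a list of (price, filled_quantity) fills.
    """
    fills = []
    remaining = quantity
    for price, available in sorted(sell_orders):  # best (lowest) price first
        if remaining <= 0:
            break
        take = min(remaining, available)
        fills.append((price, take))
        remaining -= take
    if remaining > 0:
        # Unfilled residual: a risk-engineered flow escalates this (e.g. to
        # human review) rather than silently accepting a partial fill.
        raise ValueError(f"insufficient liquidity: {remaining} unfilled")
    return fills
```

For example, buying 100 against offers of 60 at 99.5 and 50 at 99.0 fills 50 at 99.0 and 50 at 99.5. The real systems replace the `sorted` heuristic with model-driven decisions, which is precisely why the failure paths need explicit engineering.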

The Financial Stability Board has said that rapid AI adoption in finance raises the need for stronger monitoring and for sufficient supervisory and regulatory capabilities. The Bank of England, too, emphasizes that authorities need a flexible, forward-looking approach to track AI-related risks to financial stability.

The National Institute of Standards and Technology's AI Risk Management Framework organises this kind of work around four functions: Govern, Map, Measure and Manage. This should be the focus for the next-generation developer. Risk is no longer a downstream review function; it is baked into architecture. Access controls, fallback behaviour, model evaluation, data lineage, audit logs and resilience testing will become imperative skills that every software developer should start paying critical attention to.
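Two of the skills named above, fallback behaviour and audit logging, often combine into one pattern: wrap every model call so its outcome is logged and failures are routed to a safe default. A minimal sketch using Python's standard `logging` module (the function names are hypothetical):

```python
import logging

logger = logging.getLogger("model_audit")

def with_fallback(model_fn, fallback_fn):
    """Wrap a model call so failures are logged and routed to a safe fallback."""
    def wrapped(*args, **kwargs):
        try:
            result = model_fn(*args, **kwargs)
            logger.info("model ok: %r", result)  # audit trail of every decision
            return result
        except Exception:
            logger.exception("model failed; using fallback")  # logs the traceback
            return fallback_fn(*args, **kwargs)
    return wrapped
```

The point of the wrapper is architectural: the fallback and the audit record exist before anything goes wrong, rather than being bolted on after an incident.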

The next generation of developers will still need the same skills as before. But the ones who matter most will be those who can build systems that fail safely, recover quickly, and earn trust at scale. In an AI-native world, risk engineering is no longer a niche. It is the job.

(The author is Tanmay Gulati, Software Engineer at MarketAxess)
