The AI Boom and the mounting cost of Technical Debt

CyberNotes
3 min readNov 4, 2024


Image from mightybytes.com

From autonomous cars and automation to voice assistants like Alexa and Siri, and perhaps most recognizably, ChatGPT, AI has been nothing short of transformative. Behind all this hype, however, lies a less glamorous reality: the buildup of technical debt.

I’m part of a team of research software developers (RSDs) at Michigan State University, and this issue came up in our discussions last week. There is increasing concern about the direction in which the software engineering landscape is headed, with the potential for a difficult 5–6 years ahead as the consequences of accumulated technical debt begin to unfold.

But first things first:

Just what is technical debt?

Simply put, technical debt is the cost of cutting corners to meet a goal. This can take many forms, such as relying on ChatGPT to write production code, or doctors using a bot for diagnoses instead of ordering scans or other thorough examinations.

The effect, in the long run, is that more resources will be needed to correct the issues introduced by those shortcuts. Ultimately, they prove costly, detrimental, and even unsustainable.

Why is this a problem?

You probably see where the problem lies already! But nonetheless, here we go:

As it stands, we have had tons of people losing jobs to AI. Customer service representatives have been replaced by chatbots, and some organizations, such as Salesforce and Duolingo, have frozen hiring to focus on a more “AI-centered approach.” But what is that, really?

One thing we need to understand is that AI is only as good as the people who train it and the data they train it on. The garbage fed into these models is often what we see in the form of repetitive, non-original responses, far from what I’d call intelligence.

Then there are the “human-like” chatbots that can serve you coffee and even become your girlfriend if you don’t have one. It looks like everyone everywhere is rushing to become a world leader of some sort, making sure they are not left behind by the AI boom, often with little real understanding of what they are actually creating.

Why we need to be concerned

The next 5–6 years will reveal where this AI trajectory ends up. We may face systems that break down because AI built them with fundamental flaws; software developers who, after leaning on AI tools, lack the skills to write, test, or document code independently; and chatbot creators facing legal battles because their AI girlfriends encouraged teenagers to take their own lives, as in one widely reported case.

What should be done instead?

My goal is not to dismiss AI entirely. Far from it. Although I have had my run-ins with premium ChatGPT and similar AI platforms, I concede that some level of technical debt can be beneficial. However, the kind we’re seeing now raises concerns.

Here are some suggested practices for using AI effectively:

  • Document Extensively: Clearly indicate when and why AI-generated code/content is used. Include detailed comments on its purpose, functionality, and any unique assumptions. This transparency helps future developers understand the context and limitations of the code.
  • Set Boundaries for AI Use: AI tools can speed up development but may introduce complexity. Identify specific tasks where AI is valuable (e.g., generating boilerplate code or routine tasks) and avoid relying on it for complex, critical functions where precision is essential.
  • Refine AI Output: Avoid using AI-generated code as-is. It is good practice to review, refactor, and extensively test AI outputs before they reach production.
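To make the first and third practices concrete, here is a small sketch of what a reviewed, documented piece of AI-assisted code might look like. The function, its name, and the bug described in its docstring are all hypothetical, invented purely for illustration:

```python
# AI-ASSISTED: initial draft generated by an LLM;
# reviewed, refactored, and tested by a human before merging.

import re

def slugify(title: str) -> str:
    """Convert a title into a lowercase, hyphen-separated URL slug.

    Origin: LLM draft. Human review fixed a (hypothetical) bug where
    consecutive separators produced double hyphens ("a  b" -> "a--b").
    """
    # Lowercase, collapse runs of non-alphanumeric characters into a
    # single hyphen, then strip leading/trailing hyphens.
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

# Regression tests added during human review -- never trust the
# model's output without them.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  AI   Boom  ") == "ai-boom"
```

The point is not the function itself but the habits around it: the AI origin is stated up front, the reviewer’s fix is documented where future developers will see it, and tests pin down the behavior so the shortcut doesn’t silently become debt.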

So there. The next time you think, “Here we go, AI to the rescue!” THINK AGAIN!

#AI #Cybersecurity #TechnicalDebt


Written by CyberNotes

Data Science/Cyber - Student at Michigan State University.
