Since its debut towards the end of 2022, ChatGPT has taken the world by storm, sparking the imagination of future-minded individuals as well as those more risk-averse among us. With it, important discussions have emerged around artificial intelligence (AI) as a whole – its beneficial applications, negative repercussions and ethical considerations. The technology will undoubtedly be revolutionary for all industries, notably software development.
AI will change how information is generated, flows and is acted upon throughout the software development lifecycle. This increased flow and visibility of information could make software development far more responsive and integrated than it is today. Instead of a line of code taking two weeks to become a tested, releasable feature, the accelerations and efficiencies provided by such smart tooling could cut that time to a day for an average development team.
Indeed, a developer today might write a piece of code that is then committed to their team’s toolchain. That commit is bundled with the rest of a release and checked by lightweight automated tools for correctness and functionality before being handed to a quality assurance (QA) tester, who might get to those tests in a few days. If the tests pass, the code moves a step closer to release; if not, it is sent back for rework and resolved a couple of days later. Using AI to generate a QA test case, and to fix the code when that test fails, could eliminate that rework loop and trim days from that single step.
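As a rough illustration of what that shortened loop might look like, consider the sketch below. It is a minimal, hypothetical example rather than a production pipeline: the `complete()` helper is a stand-in for whichever LLM service a team adopts, and the test command and review step are assumptions, not a specific vendor’s workflow.

```python
import subprocess

def complete(prompt: str) -> str:
    """Hypothetical helper that sends `prompt` to whatever LLM service
    the team has adopted and returns the model's text response."""
    raise NotImplementedError("wire this to your LLM provider of choice")

def run_tests() -> subprocess.CompletedProcess:
    # Run the existing automated test suite and capture its output.
    return subprocess.run(["pytest", "-q"], capture_output=True, text=True)

def ai_rework_step(source_path: str) -> None:
    result = run_tests()
    if result.returncode == 0:
        return  # Tests pass: the commit proceeds toward release as before.

    with open(source_path) as f:
        source = f.read()

    # Ask the model to propose a fix for the failing tests. A human
    # developer still reviews the suggestion before anything is committed.
    suggestion = complete(
        "The following tests failed:\n" + result.stdout
        + "\nHere is the source file:\n" + source
        + "\nPropose a corrected version of the file."
    )
    print(suggestion)  # surfaced for review, not applied automatically
```

The point of the sketch is the shape of the loop: the failure, the proposed fix and the human review all happen within the same step, instead of bouncing between developer and tester over several days.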
Moreover, AI can supplement some of the work of human developers, architects and QA testers. For instance, developers could interrogate an AI tool trained on industry best practices, regulatory requirements and articles to produce a list of security requirements they must meet when designing new systems or software. Equally, AI could be used to generate QA test cases that exercise the functionality described in those security requirements.
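In practice, both tasks reduce to prompting. The snippet below, which reuses the illustrative `complete()` helper from the previous sketch, shows one hypothetical way to chain them; the prompt wording and the line-per-requirement format are assumptions for the sake of the example.

```python
# Ask the model for security requirements, then seed a QA test case
# from each one. Both prompts are illustrative, not a product's API.
security_requirements = complete(
    "Drawing on industry best practices and regulatory guidance, list "
    "the security requirements a new customer-facing web service must "
    "meet, one per line."
)

for requirement in security_requirements.splitlines():
    test_case = complete(
        "Write a QA test case that verifies this security requirement: "
        + requirement
    )
    print(test_case)  # collected for the QA team to review and run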
Development teams are measured against many different metrics, but one that is nearly universal is the amount of functionality they add to an application. Using AI to assist in building features will allow them either to meet their existing functionality targets while focusing on other areas such as quality, security and maintainability, or to increase the number of features they ship outright.
This will enable teams to cut time-to-market in industries where speed is key, or to build better, more reliable software where uptime is the primary focus. In other words, organizations that carefully and deliberately integrate AI into their workflows will have an edge over those that fail to adapt.
Supervising AI
Nonetheless, all use cases for AI will have to be supervised for the foreseeable future; at least as long as hallucination – an AI’s tendency to confidently assert facts that don’t agree with the reality we live in – remains a problem. Some “attacks” are less a skilled hack executed by the best of the best than the exploitation of an oversight or a mistake, and attacks on unsupervised AI, or on software written by unsupervised AI, will mostly be of that kind.
Take the swift adoption of cloud technologies as an example. When cloud hosting was growing rapidly, every few weeks brought a headline about a data breach in which someone found private or sensitive data in a cloud storage bucket misconfigured with allow-all access rules. Every new technology has had its specific vulnerabilities, from personal computers to web servers to smartphones, and AI-provided services will continue that trend.
Whether it’s fooling the AI’s decision-making process, tricking the AI into disclosing secrets or attacking the AI’s underlying software platform, the software itself will pose new vulnerabilities and security concerns.
It is also likely that once attackers learn the common flaws in software written by hallucination-prone AI, they will rigorously and routinely exploit those flaws wherever they find them. These could be security controls the AI simply omitted, or extra functionality that the application does not need but that was included anyway. Once researchers understand the types of hallucinations different AI models experience, they will be able to predict which flaws will be present in the software those models write.
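As a concrete, hypothetical illustration of the kind of omission meant here: a model asked for a database lookup might emit string-built SQL, a well-known injection flaw that an attacker familiar with a model’s habits would probe for first. The supervised fix is a one-line change.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # The kind of code a hallucinating model might emit: user input is
    # concatenated straight into the query, allowing SQL injection.
    cursor = conn.execute(
        "SELECT * FROM users WHERE name = '" + username + "'"
    )
    return cursor.fetchone()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # The supervised fix: a parameterized query keeps the input as data,
    # never as executable SQL.
    cursor = conn.execute("SELECT * FROM users WHERE name = ?", (username,))
    return cursor.fetchone()
```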
To summarize, there is no denying AI’s importance in facilitating safer and more efficient software development. Turning a blind eye to these developments and carrying on with ‘business as usual’ could be fatal to a business’s survival. But that is not to say we should take a hands-off approach to this latest tidal wave of change. Rather, developers, architects and QA testers should remain vigilant over the work AI produces, treating it as a tool that enhances the processes they have in place rather than one that takes those processes over.