A Tool for Checking Software Security Risks May Not Be Too Far From Reality


A Concordia professor is using machine learning to improve software security. He wants to create a tool that will enable developers to check their code for security risks.

In this digital era, much of our daily life depends on software. But poorly written code can damage computer systems or expose users to data breaches by malicious actors.

Yann-Gael Gueheneuc, a professor in the Department of Computer Science and Software Engineering at Concordia University in Montreal, is using machine learning to improve software security.

He is teaching machine learning algorithms to develop their own rules for software quality — what’s acceptable and what might represent a security risk to the user.

Gueheneuc wants to create a tool that software developers can use to check over their code when they’ve finished writing it — sort of like spell check in Microsoft Word.

Improving software quality with machine learning

“We’re researching how to improve software quality, and one of the problems with software quality is we have to have clear, strict rules to enforce it. But we cannot describe very explicit rules to measure quality because there are too many factors to take into account,” the professor says on the Concordia University website.

According to Gueheneuc, the machine learning algorithm currently generates a list of software pieces, labeling each one “pretty good” or “pretty bad.”

Eventually, the algorithm should be able to tell us, “this piece of code is unsafe. It’s not secure. You have to rewrite, redesign or modify it to make it more secure,” he added.
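To make the idea concrete, learning a quality rule from labeled examples can be sketched as a toy classifier. Everything below — the metric names (`nesting_depth`, `lines`), the training data, and the decision-stump learner — is a hypothetical illustration for the general approach, not Gueheneuc's actual method or data.

```python
# Toy sketch: learn a "risky vs. acceptable" rule from labeled code metrics.
# Metric names and labels here are invented for illustration only.

def best_stump(samples):
    """Learn a single-metric threshold rule (a decision stump) that best
    separates risky (label 1) from acceptable (label 0) code samples.
    Each sample is a (metrics_dict, label) pair."""
    metric_names = samples[0][0].keys()
    best = None  # (errors, metric, threshold)
    for name in metric_names:
        for thresh in sorted({m[name] for m, _ in samples}):
            # Candidate rule: flag the code as risky when metric > thresh
            errors = sum(
                (m[name] > thresh) != bool(label)
                for m, label in samples
            )
            if best is None or errors < best[0]:
                best = (errors, name, thresh)
    return best

# Hypothetical training set: metrics measured on code fragments, labeled
# 1 = "pretty bad" (potential security risk), 0 = "pretty good".
training = [
    ({"nesting_depth": 2, "lines": 40}, 0),
    ({"nesting_depth": 6, "lines": 300}, 1),
    ({"nesting_depth": 1, "lines": 15}, 0),
    ({"nesting_depth": 7, "lines": 500}, 1),
]

errors, metric, threshold = best_stump(training)
print(f"Learned rule: flag code when {metric} > {threshold} "
      f"({errors} training errors)")
```

A real system would of course learn from many metrics and thousands of labeled examples, but the principle is the same: instead of a human writing explicit quality rules, the algorithm derives a rule from examples of good and bad code.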