your conclusion is definitely correct. the good thing is that if these LLMs can be used offensively to find exploits, they can also be used defensively to find (and potentially even fix) the same ones.
I'm still of the opinion that AI is a tool that amplifies the abilities of its user, so capability will vary with who's wielding it.
with that argument in mind, open source stands to be a major beneficiary, if these tools are used to aid development in that way.
overall, what I think that means (what I hope it means) is that we'll see much broader interest in learning secure development as AI research progresses and these tools become more widely used.