Artificial Intelligence: Stopping the Big Unknown in Application, Data Security

Artificial intelligence, particularly large language models like GPT, was the talk of the town during last week’s Black Hat and Def Con in Las Vegas. But even the experts disagreed about the extent to which AI changes the security posture companies should take, from protecting internal data to developing applications.

Early in the first day of Black Hat’s briefings on Wednesday, keynote speaker and Arm reverse engineer and exploitation expert Maria Markstedter paraphrased OpenAI CEO Sam Altman’s quip that “AI will most likely lead to the end of the world, but in the meantime, there’ll be great companies.”

“Move fast, break shit,” Markstedter recommended, quoting a frequently mentioned Black Hat motto. “That’s why products always initially lack security functions and, in the past, companies had to be forced into investing into security.”

Right now, generative AI is about text, but multimodal AI is coming, Markstedter warned. It will be able to handle not just chat but also live video, performing sentiment analysis not only on a person’s voice but on their body language, she said. Maintaining anonymity will be critical for these systems, and these new AI capabilities will mean, at a minimum, rethinking our ideas about data security, she added.

Impact on Security

No one is sure how, where or when this will impact security. It is already affecting data security, Markstedter pointed out: all it takes is one employee copying data into a black-box AI chatbot. But how it impacts other aspects of security isn’t yet certain. For instance, LLMs make it possible to generate malicious code, but they can’t execute that code themselves, according to two former OpenAI employees who spoke about their potential use in security.

There was one thing everyone did seem to agree on: Banning AI is a short-term solution that won’t pass muster for long because businesses want to adopt AI technologies. Eventually, AI security will require embracing large language models and other AI technology. Those who don’t embrace it will fall behind, security experts repeatedly warned at the conference.

“[Either we decide] integrating autonomous agents is way too risky, or we accept that they become a reality and develop solutions to make them safer,” Markstedter said in her keynote talk on Wednesday. “This is our chance to reinvent ourselves [and] our security posture. And so for the next stage of security challenges, we need to come together as a community and foster research because our community’s rather fragmented,” she said, explicitly referring to the fact that Twitter/X has lost its status as a centralized repository for developer conversations.

Developers have a chance to be part of the solution, though: The defense research agency DARPA challenged Black Hat and Def Con attendees to help create a next-generation AI-based responsive security system. Some of you may remember DARPA from ARPANET, which brought us such fabulous hits as the beginning of the internet and the implementation of the TCP/IP protocol suite.

Perri Adams, a program manager in DARPA’s Information Innovation Office, announced the competition in a last-minute addition to Wednesday’s Black Hat keynote event.

“Black Hat is certainly where industry leaders and experts gather every year to drive innovation for defense. Cybersecurity is always a race between offense and defense,” Adams said. “And there’s no silver bullet here. But recent technological advances do offer promising new ways of ensuring that we can [keep] defense one step ahead.”

AI Cyber Challenge

Specifically, Adams was referring to AI’s potential for responding to threats in real time and at scale. The two-year AI Cyber Challenge (AIxCC) is aimed at creating a new generation of cybersecurity tools, DARPA’s announcement noted. DARPA is awarding a total of $20 million in prizes to the teams that create the best systems.

AIxCC will allow two tracks for participation: the Funded Track and the Open Track. Funded Track competitors will be selected from proposals submitted to a Small Business Innovation Research solicitation. Up to seven small businesses will receive funding of up to $1 million to participate. Open Track competitors register with DARPA via the competition website and proceed without DARPA funding.

The 2024 semifinal at Def Con will determine the top five teams, which will proceed to a second round of experimentation with an additional $2 million in funding. The 2025 Def Con winners will receive $4 million for first place; $3 million for second place; and $1.5 million for third place.

This isn’t just a government event. AIxCC is also working with leading AI companies, including Anthropic, Google, Microsoft and OpenAI, which will share their AI expertise. The Open Source Security Foundation (OpenSSF), a project of the Linux Foundation, will serve as a challenge advisor to guide teams in creating AI systems capable of addressing vital cybersecurity issues, such as the security of the nation’s critical infrastructure and software supply chains.

The AIxCC competition will consist of two phases, a semifinal and a final, held at Def Con in Las Vegas in 2024 and 2025, respectively.
