
With the growing interest in generative artificial intelligence (AI) systems around the world, researchers at the University of Surrey have created software that can check how much information an AI system has gleaned from an organization's digital database.
Surrey’s verification software can be used as part of a company’s online security protocol, helping the organization understand if the AI has learned too much or even gained access to sensitive data.
The software can also determine whether an AI has identified, and is able to exploit, flaws in software code. For example, in the context of online games, it could determine whether an AI has learned to always win at online poker by exploiting a coding error.
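As a loose illustration only (not the Surrey tool's actual method), the core question, "has the AI learned something it should not know?", can be sketched as a check of the facts an agent can derive from its observations against a list of sensitive facts. All names in this Python sketch are hypothetical:

```python
# Illustrative sketch only -- not the Surrey verification software.
# Models "knows too much" as set containment over derivable facts.

SENSITIVE_FACTS = {"customer_ssn", "payroll_table", "poker_rng_seed"}

def derivable_facts(observations, inference_rules):
    """Close the agent's observations under simple inference rules.

    inference_rules maps a frozenset of premises to a concluded fact,
    e.g. {frozenset({"db_dump"}): "payroll_table"} (hypothetical).
    """
    known = set(observations)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in inference_rules.items():
            if premises <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

def knows_too_much(observations, inference_rules):
    """Return any sensitive facts the agent could have deduced."""
    return derivable_facts(observations, inference_rules) & SENSITIVE_FACTS

# Example: an AI that has seen a raw database dump can derive the payroll table.
rules = {frozenset({"db_dump"}): "payroll_table"}
print(knows_too_much({"db_dump", "public_docs"}, rules))  # -> {'payroll_table'}
```

The real system uses formal verification rather than a toy fixed-point closure like this, but the privacy check it performs is of the same shape: does the agent's inferred knowledge intersect the set of things it must not know?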
Dr Solofomampionona Fortunat Rajaona is a Research Fellow in formal verification of privacy at the University of Surrey and lead author of the paper. He said:
“In many applications, AI systems interact with each other or with humans, such as self-driving cars on a highway or hospital robots. Knowing what an intelligent AI data system knows is an ongoing problem that took us years to find a working solution for.
“Our verification software can infer how much an AI can learn from its interactions, whether it has enough knowledge to enable successful collaboration, and whether it has learned so much that it would break privacy. By being able to verify what the AI has learned, we can offer confidence to safely unleash the power of AI in secure settings.”
The study describing the Surrey software won the Best Paper Award at the 25th International Symposium on Formal Methods.
Professor Adrian Hilton, Director of the Institute for People-Centred Artificial Intelligence at the University of Surrey, said:
“Over the past few months, there has been a significant rise in public and industry interest in generative AI models, fueled by developments in large language models such as ChatGPT. Building tools that can verify the performance of generative AI is essential to support its secure and responsible deployment. This research is an important step towards preserving the privacy and integrity of the datasets used in training.”
More information: https://openresearch.surrey.ac.uk/esploro/outputs/99723165702346