The rise of generative artificial intelligence has ushered in a new era of potential and challenges in an increasingly connected world. Powered by deep learning algorithms and enormous volumes of data, generative AI systems can develop and innovate across a wide range of fields, pushing the limits of creativity and problem-solving.
How Cybersecurity Is Crucial in the AI Age
As this technology advances, addressing the underlying dangers and weaknesses that could limit its usefulness becomes more crucial than ever. Cybersecurity has become indispensable for protecting people, companies, and even entire countries from potential threats, and it is essential for maintaining data integrity, confidentiality, and availability. Strong cybersecurity measures are urgently needed: generative AI’s unmatched data generation, analysis, and interpretation capabilities leave every entity involved vulnerable to abuse by bad actors.
Dr. Shivani Shukla, Associate Professor of Business Analytics at the University of San Francisco and an active researcher, is well aware of these issues. She is an equally strong advocate for harnessing the benefits of generative AI, particularly its development and deployment in healthcare and public infrastructure.
Creating Strategies to Mitigate Risks
As a generative AI and cybersecurity researcher, Shukla uses mathematical techniques from operations research, game theory, and computer science to advance these fields. She also applies generative AI and cybersecurity techniques to realize their benefits across multiple industries. Her focus has been on infrastructure (public assets such as roads, rails, and ports) and healthcare, two of the most critical issues for the US.
Shukla explains, “My work has enabled corporate teams to incorporate cutting-edge technological advancements into their product offerings at scale, providing users with highly accurate and contextually relevant content. While academic research offers numerous benefits, my primary focus is on making these advances accessible to a wider audience, allowing them to take full advantage of the enormous potential this technology offers.”
However, with any technological development, there are concerns about ethics, bias, and the generation of fake information. In sensitive fields like healthcare, these issues could have serious ramifications.
“To address these concerns, I work closely with businesses to develop technology that mitigates these risks. While this can present challenges, it is essential to ensure that generative AI technology is used responsibly and ethically. My work with companies has allowed me to develop practical solutions that enable them to utilize the full potential of generative AI while maintaining a focus on responsible usage,” shares Shukla.
Within cybersecurity, the challenges presented by generative AI are obvious: advanced cyberattacks, data poisoning, automated hacking, and large-scale disinformation. Shukla’s efforts center on solutions that reduce network vulnerability across physical and non-physical systems.
Shukla’s work focuses on quantifying how network vulnerability changes when information is shared. In her PhD thesis, she explored cooperative game theory in this context, developing theoretically stable, feasible, and unique solutions for cooperative game scenarios. Her expertise in game theory carries over naturally to Generative Adversarial Networks (GANs), a class of machine learning frameworks prominently used in generative AI.
This theory is relevant to the generator and discriminator components of GANs, which underpin various generative AI models, including deepfakes. In simple terms, a GAN sets up a two-player game in which the generator creates images or text and the discriminator determines whether they are real or fake.
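This two-player game can be made concrete. The sketch below is an illustrative toy, not code from Shukla’s research: it uses the standard GAN analysis, in which the optimal discriminator is D*(x) = p_data(x) / (p_data(x) + p_gen(x)), to evaluate the game on a small discrete distribution. The game’s value bottoms out at -log 4 exactly when the generator’s distribution matches the real data, i.e., when the fakes are indistinguishable.

```python
import math

def optimal_discriminator(p_data, p_gen):
    # Classic GAN result: the best discriminator outputs
    # D*(x) = p_data(x) / (p_data(x) + p_gen(x))
    return {x: p_data[x] / (p_data[x] + p_gen[x]) for x in p_data}

def game_value(p_data, p_gen):
    # V(D*, G) = E_data[log D*(x)] + E_gen[log(1 - D*(x))]
    d = optimal_discriminator(p_data, p_gen)
    v_real = sum(p * math.log(d[x]) for x, p in p_data.items())
    v_fake = sum(p * math.log(1 - d[x]) for x, p in p_gen.items())
    return v_real + v_fake

p_data = {"a": 0.7, "b": 0.3}

# A generator that matches the data reaches the minimum, -log 4 ≈ -1.386
print(game_value(p_data, {"a": 0.7, "b": 0.3}))

# Any mismatch gives the discriminator an edge (a higher game value)
print(game_value(p_data, {"a": 0.3, "b": 0.7}))
```

Training a real GAN alternates gradient steps for both players; this sketch skips that loop and only evaluates the equilibrium condition, which is the game-theoretic core of the idea.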
Cooperative game theory can help outline the payoffs in the adversarial training process: the cyber adversary gains from successfully infiltrating a system, while the defender gains from accurately identifying and neutralizing a threat. Shukla believes that if cyber threats can emerge from generative AI, multiple AI systems can also form a coalition, sharing information and resources to withstand those threats; the answer to threats from generative AI lies within the technology itself. In scenarios where multiple such AI systems are trained simultaneously, cooperative game theory can illuminate the dynamics of multi-agent systems and inform strategies for robust defense mechanisms. By training generative models on both legitimate and adversarial samples, adversarial patterns can potentially be identified.
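One standard cooperative-game tool for dividing a coalition’s payoff fairly is the Shapley value. The sketch below is a hypothetical illustration (the payoff function and player names are invented, not drawn from Shukla’s work): three defending AI systems pool threat intelligence, each pair of cooperating defenders blocks extra attacks beyond what they block alone, and the Shapley value splits the coalition’s total credit according to average marginal contribution.

```python
import math
from itertools import permutations

def shapley_values(players, value):
    """Average each player's marginal contribution over all join orders."""
    n = len(players)
    shapley = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            shapley[p] += value(frozenset(coalition)) - before
    return {p: v / math.factorial(n) for p, v in shapley.items()}

# Hypothetical payoff: threats blocked when defenders pool intelligence.
# Cooperation is superadditive: every pair blocks 5 extra threats together.
def threats_blocked(coalition):
    base = {"A": 10, "B": 20, "C": 30}
    solo = sum(base[p] for p in coalition)
    synergy = 5 * len(coalition) * (len(coalition) - 1) // 2
    return solo + synergy

print(shapley_values(["A", "B", "C"], threats_blocked))
```

Because the synergy here is symmetric, each defender receives its solo contribution plus an equal share of each pairwise bonus; the shares always sum to the grand coalition’s total payoff (the efficiency property).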
Cybersecurity measures must be integrated into the building blocks of the generative AI development process to ensure the security of these systems. Shukla’s research on stable, feasible, and unique solutions for cooperative game scenarios contributes directly to that goal.
Controversial Stand on Generative AI
Dr. Shukla is an advocate of the open-source initiative and firmly asserts, “Improper regulations and expensive licensing could impede the growth of new companies or individuals developing technology. While it can be used maliciously, so can other technology.”
Furthermore, she emphasizes the importance of integrating security measures into AI systems right from their inception. Shukla elaborates, “This entails implementing secure coding practices, fortifying the system against adversarial attacks, and employing strategies to mitigate potential exploitation. Privacy-preserving AI techniques such as Differential Privacy and Federated Learning can also safeguard data confidentiality.”
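As a brief illustration of one of the techniques Shukla names, the sketch below implements the classic Laplace mechanism for differential privacy in pure Python. A counting query has sensitivity 1, so adding Laplace noise with scale 1/ε makes the released count ε-differentially private. The function names here are illustrative, not taken from any particular library or from Shukla’s own code.

```python
import math
import random

def laplace_noise(scale, rng):
    # Inverse-CDF sampling of a Laplace(0, scale) variate
    u = rng.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def private_count(true_count, epsilon, rng):
    """Release a count with epsilon-differential privacy.

    A counting query changes by at most 1 when one record is added or
    removed (sensitivity 1), so Laplace(1/epsilon) noise suffices.
    """
    return true_count + laplace_noise(1.0 / epsilon, rng)

rng = random.Random(42)
# Smaller epsilon means stronger privacy but noisier answers
print(private_count(1000, epsilon=0.5, rng=rng))
```

The noise is unbiased, so averages over many private releases concentrate around the true count, while any single release reveals little about whether one individual’s record is in the data.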
Recognizing the substantial scale of these endeavors, Shukla highlights the necessity of collaboration across sectors and disciplines to address the cybersecurity challenges associated with generative AI effectively. This collaborative approach may involve forging partnerships among tech companies, academic institutions, non-profit organizations, and government agencies, collectively promoting responsible usage and cultivating secure AI systems.
Establishing the Future of Generative AI
Shukla acknowledges the ongoing discussions about the potential negative consequences of generative AI and the necessity for regulations: “There is enough being said about how generative AI can be harmful and the need for regulations. While I don’t disagree with those discussions and understand the cybersecurity risks involved, it is crucial to hear from more proponents of the technology, like myself, who are dedicated to utilizing it for beneficial objectives like improving healthcare and tackling the issues of aging public infrastructure.”
Aligned with this desire for a more open conversation about generative AI’s benefits across multiple sectors, Shukla also emphasized the necessity of establishing industry standards for responsible use: “Industry standards will need to be established, and open-source technology, which is already at par with OpenAI and Google’s tech, is here to stay. My work in the future will be on how these will evolve.”
As the digital landscape evolves, integrating cybersecurity with generative AI becomes an imperative synergy, ensuring a future where technology empowers rather than undermines collective well-being. Proponents like Shukla are committed to unraveling the intricate relationship between cybersecurity and generative AI and to charting a path toward a secure and prosperous digital future.