
Ilona Cohen discusses current gaps and future opportunities in AI and cybersecurity regulation.
In a conversation with The Regulatory Review, technology law expert Ilona Cohen shares her perspective on how best to regulate developing cybersecurity and artificial intelligence (AI) technologies.
Cohen argues that regulators often focus on today’s specific technologies, instead of designing security measures that can withstand future technological change. Cohen offers potential steps that policymakers and regulators can take to develop more comprehensive reforms, including adopting baseline standards for all AI systems as well as requiring companies and agencies to perform ongoing tests for security vulnerabilities. Cohen also emphasizes the need for regulators to engage with technology companies as they develop regulation.
Cohen is the chief legal and policy officer at HackerOne, a cybersecurity company that assists organizations with finding and fixing security vulnerabilities in their technology systems. Cohen previously served as the chief legal officer for the health care technology company Aledade. Prior to her work in the private sector, Cohen was general counsel at the Office of Management and Budget. She also advised President Barack Obama on national security matters and government cybersecurity programs as a special assistant and associate counsel to the President.
The Regulatory Review is pleased to share the following interview with Ilona Cohen.
The Regulatory Review: What are the biggest challenges in AI and cybersecurity regulation today?
Cohen: Both cybersecurity and AI policy are deeply intertwined with our country’s national security and economic goals. Any discussion of regulation needs to consider the impact on these complex and important objectives. Regulators must also keep pace with technological change. It can take years to develop regulations in Washington or a European capital. While those regulatory discussions take place, technology advances rapidly, and that pace may make a proposed policy irrelevant.
Effective regulation needs to encourage the types of behavior that make systems safe and trustworthy as companies pursue innovation. In the cyber and AI space, regulators need to encourage a proactive and comprehensive approach to identifying and fixing vulnerabilities, and to promote building security and continuous monitoring into AI and information technology (IT) systems so that those systems remain secure and operate as intended.
TRR: What role, if any, should technology companies play in developing AI and cybersecurity regulations?
Cohen: Technology companies are critical voices in the policy debates on cybersecurity and AI. The development of AI and cybersecurity regulations, for example, benefits from public comments provided by a range of stakeholders, including tech companies. Companies often have earlier and more detailed knowledge than the government on technological advances, cyberattacks, and certain vulnerabilities, and they can explain the potential impact of regulations on innovation.
On the cyber front, this expertise is why it is so important for Congress to renew the Cybersecurity Information Sharing Act of 2015, the law that facilitates timely information sharing by companies about cyber threats. This law, which has been essential to our nation’s cyber defense and response, will expire at the end of January absent action by Congress. Without it, companies lack certainty about legal protections for sharing information. In the absence of such protections, companies may hesitate to disseminate cyber threat information—leaving both private sector and government networks more exposed to exploitation.
TRR: What internal governance structures should companies establish to comply with evolving technology regulations?
Cohen: At the highest level, corporate boards increasingly focus on cybersecurity and AI governance. These issues can have a significant impact on a company’s financial performance, supply chain, operations, and reputation. Companies are focused on proactively managing risk, which involves a different mindset than one focused solely on compliance. Regulatory compliance is one area of risk management, but given the ever-expanding importance of technology even in more traditional industries, cybersecurity and AI require a more comprehensive approach.
Boards can hold senior executives accountable for information security and compliance programs and participate in simulated cybersecurity incidents—tabletop exercises—to test response plans. Operationally, companies are outlining clear roles and responsibilities across their organizations, with attorneys and security professionals often leading broader initiatives that involve training, testing, and other readiness responsibilities. Many companies are also engaging outside security researchers to proactively test technology systems for vulnerabilities and to ensure compliance with regulations.
TRR: How can regulators balance goals of promoting product innovation with consumer protection?
Cohen: Regulators should carefully consider the impact of proposed rules on innovation, the economy, national security, and consumers. Regulation can promote the adoption of strong security measures that companies can embed from the outset and scale as they grow and innovate. Secure and trustworthy IT systems and AI models are a prerequisite for sustainable innovation. That’s good for companies and good for consumers.
TRR: The Trump Administration recently released a plan to remove “red tape and onerous regulation” for AI systems. What are the benefits and drawbacks of a deregulatory agenda?
Cohen: Although the Trump Administration's plan does not impose new regulations on private companies, it does contain guardrails intended to make the systems used by the federal government stronger and more trustworthy. Examples include the plan's call to extend existing cyber vulnerability sharing mechanisms, through which federal agencies encourage the public to provide feedback on potential security issues in their technologies, to cover AI-specific vulnerabilities. The plan also encourages federal agencies and academic partners to coordinate AI-focused testing "hackathons," in which participants test AI tools for transparency, effectiveness, and security vulnerabilities.
AI developers face intense scrutiny from customers, regulators, legislators, and the public on their models and how they are used. In this environment, regardless of regulations, the companies that succeed will be those that earn trust by ensuring that their AI models are secure, perform as intended, and deliver benefits to their users and society. Through the National Institute of Standards and Technology, the U.S. government has also developed a cybersecurity framework and an AI risk management framework to help organizations voluntarily manage these risks.
TRR: What steps can regulators take to address potential use of AI systems to facilitate cyberattacks?
Cohen: AI has accelerated the pace and magnitude of cyberattacks and broadened the attack surface of many companies. Technology leaders have encouraged the adoption of best practices from cybersecurity to respond to these challenges. These solutions include proactive security measures such as adversarial testing, also known as red-teaming, in which organizations deliberately seek to bypass safety measures in their AI models to ferret out cybersecurity vulnerabilities before malicious actors can find them.
The federal government also has opportunities to strengthen its supply chain and to expand protections to its use of AI systems. Federal agencies already provide open channels for members of the public to disclose security vulnerabilities that they find. Congress is considering requiring federal contractors to adopt these disclosure programs to boost protection throughout the government’s supply chain. The government and the private sector should expand the coverage of these programs to include AI vulnerabilities and unintended outcomes. They should also adopt tools such as AI red-teaming, bug bounty programs, and continuous monitoring and life-cycle audits of AI technologies. Doing so will better allow the government and private sector to ensure that the AI systems they are using are safe and trustworthy.
TRR: How has your work in government informed your perspective on regulating AI and cybersecurity?
Cohen: One lesson is that cybersecurity and responsible development of AI are bipartisan issues that are critical to national security and the economy. There is more continuity on these issues compared to others across administrations and more bipartisan cooperation to advance solutions that protect critical infrastructure, enhance economic competitiveness, and keep pace with technological change.
In addition, the incidents that occurred while I served in the Obama White House—such as the breach involving more than 20 million personnel records at the U.S. Office of Personnel Management—underscored the need for broad adoption of proven best practices. The supply chain has often been the “attack vector” that leads to a breach. Since an organization is only as secure as the least secure element embedded in its systems, regulation can play a role in ensuring all entities adhere to the same baseline standards. This lesson is likely to be as relevant to emerging AI systems as it has been for traditional cybersecurity.