
Paul Nakasone, the retired U.S. Army general who formerly led the National Security Agency, will join OpenAI's board of directors, the AI company announced Thursday afternoon. He will also sit on the board's Safety and Security Committee.
The high-profile addition appears aimed at satisfying critics who think OpenAI is moving faster than is wise for its customers and for humanity, releasing models and services without properly assessing or mitigating their risks.
Nakasone brings decades of experience from the U.S. Army, U.S. Cyber Command, and the NSA. Whatever one thinks of the practices and decisions of those organizations, they can hardly be accused of a lack of professionalism.
As OpenAI increasingly positions itself as an AI provider not only to the tech industry but also to government, defense, and major corporations, this kind of institutional knowledge is valuable not just for its own sake but also as a balm for worried shareholders. (There is no doubt that the connections he brings to state and military institutions are also welcome.)
“My commitment to OpenAI’s mission aligns closely with my own values and experience in public service,” Nakasone said in a press release.
This certainly seems plausible. Nakasone and the NSA recently defended the practice of buying data of questionable provenance to feed its surveillance networks, arguing that no law forbids it. OpenAI, for its part, simply took large amounts of data from the internet rather than buying it, arguing when caught that no law forbids that either. The two seem to be of one mind in asking forgiveness rather than permission, if they ask at all.
The OpenAI release also states:
Nakasone's insights will also contribute to OpenAI's efforts to better understand how AI can be used to strengthen cybersecurity by quickly detecting and responding to threats. We believe AI has the potential to deliver significant advantages in this area for hospitals, schools, financial institutions, and many other organizations that are targets of cyberattacks.
So this is also a new market play.
Nakasone will join the board's Safety and Security Committee, which is “responsible for making recommendations to the full board on critical safety and security decisions for OpenAI projects and operations.” What this newly formed committee will actually do, and how it will operate, remains unclear: several senior figures working on safety (as it pertains to AI risk) have left the company, and the committee itself is in the midst of a 90-day evaluation of the company's processes and safeguards.