2024 AI Policy Forecast
Gregory C. Allen and Georgia Adamson | 2024.01.30
The Wadhwani Center for AI and Advanced Technologies’ 2024 AI Policy Forecast reviews macro developments in artificial intelligence in 2023 and presents the Wadhwani Center’s top policy issues to monitor in 2024.
The report covers a wide range of topics pertinent to AI, from international governance efforts to semiconductor export controls, with the aim of giving readers a comprehensive understanding of the key developments in 2023 and how they inform critical policy debates in 2024. The 10 policy questions that the report presents for 2024 will be featured in the Wadhwani Center’s research in the coming year. The vast scope of content in this annual report makes it clear that artificial intelligence will be central to policy and legal debates for years to come.
2023 YEAR IN REVIEW
A Timeline of Major Developments in AI in 2023
Jan 10
China’s Cyberspace Administration implements a new law to manage deepfakes, including by enforcing watermarks on AI-generated content.
Jan 23
Microsoft pledges a rumored $10 billion multiyear investment in OpenAI, claiming the future impact of AI technology will be equal to the PC or the internet.
Jan 25
The U.S. Department of Defense updates DOD Directive 3000.09, “Autonomy In Weapons Systems,” changing the original 2012 directive to reflect new technology capabilities in autonomous systems and AI.
Jan 26
The U.S. National Institute of Standards and Technology (NIST) releases the NIST AI Risk Management Framework, a set of guidelines for AI development, use, and evaluation aimed to enhance transparency and security of AI in businesses and organizations.
Jan 27
The United States and the European Union announce an agreement to accelerate joint AI research for solving global challenges in climate forecasting, agriculture, healthcare, critical infrastructure, and more. Bloomberg reports that the Netherlands and Japan will join U.S. efforts to restrict exports of semiconductor manufacturing equipment to China.
Jan 31
The White House launches the U.S.-India Initiative on Critical and Emerging Technologies (iCET), a partnership with India to advance technology and defense research and innovation and to ensure semiconductor supply chain resiliency.
Feb 2
OpenAI’s chatbot, ChatGPT, becomes the fastest-growing application in history, with 100 million users in January.
Feb 16
The Department of Defense first announces its “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy” at the Responsible AI in the Military Domain Summit in The Hague, Netherlands, the first summit of its kind.
Feb 24
Meta’s large language model (LLM) “LLaMA” is announced for limited release.
Mar 8
The Netherlands announces plans to join the United States to restrict semiconductor technology to China.
Mar 14
OpenAI reveals its latest large multimodal model ChatGPT-4, which greatly outperforms GPT-3 and other available models in several areas.
Mar 21
Nvidia reports it has modified one of its top semiconductor chips, the H100, for export to China as H800 following updated U.S. export controls last year.
Mar 22
An open letter calling for pause to all frontier AI developments for six months due to potential catastrophic risk to society is published. Signatories include prominent CEOs and academics.
Mar 31
Japan announces it will join the United States and the Netherlands in restricting exports of semiconductor manufacturing equipment overseas.
Apr 11
China’s Cyberspace Administration reveals draft steps to manage generative AI content, including bringing content in line with China’s core values.
Apr 30
The G7 concludes Digital and Tech Ministers’ Meeting (April 29–30) in Takasaki, Japan, declaring member states’ commitment to an internationally cooperative, adaptable, and risk-based approach to AI governance.
May 1
Hollywood writers begin months-long strikes over issues including AI’s role in the creative industry.
May 4
CEOs of top AI developers OpenAI, Anthropic, Microsoft, and Alphabet meet with President Biden to discuss responsible AI innovation, including companies’ responsibility to make products safe.
May 16
OpenAI CEO Sam Altman and IBM vice president Christina Montgomery testify before Congress on the risks of rapid AI development following the quick rise of ChatGPT.
May 19
G7 leaders gathered in Hiroshima discuss inclusive governance for AI at the 2023 summit.
May 30
The nonprofit Center for AI Safety publishes a one-sentence statement arguing that mitigating extinction from AI should be a “global priority,” which is signed by top AI CEOs and developers, academics, and other civil society figures.
Nvidia briefly joins tech giants in trillion-dollar market valuation, with shares up over 200 percent since late 2022.
Jun 21
Senate Majority Leader Chuck Schumer announces SAFE Innovation Framework for AI Policy at CSIS.
Jun 22
The Department of Commerce announces new public working group to implement and build upon NIST’s AI Risk Management Framework created in January.
Jun 28
OpenAI sued by authors for copyright infringement after training ChatGPT on their works without proper licensing, raising wider copyright concerns around AI systems.
Jul 18
UN Security Council convenes to discuss AI risks for first time.
Meta launches open-source LLM “LLaMA 2.”
Jul 21
Leading AI firms including OpenAI, Meta, and Google make voluntary commitments to the White House for ensuring safe AI, including testing products before release and watermarking AI-generated content.
Jul 24
Japan’s export controls on 23 types of semiconductor manufacturing equipment go into effect.
Jul 26
OpenAI, Anthropic, Google, and Microsoft announce new industry body Frontier Model Forum, founded to promote the responsible development of AI and to share knowledge with policymakers.
Aug 8
Nvidia announces new cutting-edge semiconductor chip GH200, speeding up processing times for generative AI systems.
Aug 9
President Biden signs executive order banning U.S. investment in sensitive technologies such as AI in China for national security and competition reasons.
Aug 10
The Department of Defense reveals new AI taskforce “Lima” to oversee integration of generative AI capabilities into the department.
Aug 28
The Department of Defense unveils new Replicator initiative to accelerate procurement and fielding of all-domain autonomous and attritable military systems to compete with China.
Aug 29
Huawei releases new Mate60 Pro smartphone with 7-nanometer semiconductor chip, highlighting China’s technical advancements despite U.S. export and investment restrictions.
Aug 31
Chinese tech company Baidu releases AI chatbot Ernie Bot to the public in China.
Sep 1
New Dutch restrictions on exporting semiconductor manufacturing equipment to China go into effect.
Sep 13
First AI Insight Forum session is held on Capitol Hill. Led by Senate Majority Leader Chuck Schumer, the meeting convenes senators, prominent tech CEOs, and civil society figures to discuss U.S. government oversight of AI.
Sep 28
The National Security Agency announces a new body, the AI Security Center, to oversee AI adoption into U.S. national security systems.
Oct 9
China aims to boost its total computing power by 50 percent by 2025, Chinese ministry reports.
Oct 17
The White House announces new and updated measures to restrict AI and semiconductor technology exports to China and other countries, closing loopholes in 2022 control policies.
Oct 26
China accepts UK invitation to take part in the UK AI Safety Summit in November amid controversy.
UK government reveals plan to form the world’s first institute for AI safety.
Oct 30
President Biden signs Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.
The G7 releases a statement on the Hiroshima AI Process on AI risks and the benefits of fostering an open environment for global collaboration on AI.
Nov 1
The United Kingdom’s AI Safety Summit opens at Bletchley Park, convening prominent political and technology leaders to discuss international cooperation in AI governance for two days (November 1–2). Twenty-eight countries and the European Union sign the “Bletchley Declaration,” announcing international commitment to AI governance and next steps.
Secretary of Commerce Gina Raimondo announces launch of a U.S. AI safety institute at the UK AI summit.
Nov 2
The Department of Defense releases its AI strategy, directing the accelerated adoption of advanced AI within the DOD.
Nov 15
Microsoft reveals custom-designed semiconductor chip with aim to cut high costs of AI products.
Nov 17
Chief Executive Officer Sam Altman is temporarily ousted from OpenAI by the company’s board, only to be reinstated five days later on November 21.
Nov 22
Media source Reuters gains knowledge of new OpenAI project “Q*,” rumored to be a breakthrough in developing artificial general intelligence.
Nov 24
Russian president Vladimir Putin says Russian AI strategy will be released soon to counter the Western and Chinese monopoly on AI at a conference in Moscow.
Nov 24
Meta announces restrictions on AI-generated content for advertising ahead of the 2024 election, including mandatory disclosures of AI-generated advertising to the public.
Researchers discover a weakness in ChatGPT, allowing it to reveal sensitive information by using a single prompt.
Nov 30
The U.S. Patent and Trademark Office announces a new Semiconductor Technology Pilot Program designed to encourage innovation in semiconductor manufacturing and support the CHIPS and Science Act.
Dec 6
Google launches AI model Gemini, the first model to outperform human experts on Massive Multitask Language Understanding (MMLU).
Dec 8
In a global first, the European Union establishes a landmark comprehensive AI regulation in passing the EU AI Act.
Dec 13
The Financial Times reports that generative AI is widely used by multiple political parties in Bangladesh’s 2024 elections as deepfakes and AI-generated misinformation circulate social media and news outlets.
Dec 30
The New York Times sues Microsoft and OpenAI for copyright infringement, claiming AI chatbots illegally used millions of articles for training.
Nvidia launches advanced gaming chip GeForce RTX 4090 D for Chinese consumers, adapted to comply with updated U.S. export controls.
2023 TOP TAKEAWAYS
The Wadhwani Center’s Key Takeaways from Developments in AI Last Year
U.S. senate majority leader Chuck Schumer
June 21, 2023
Now, friends, we come together at a moment of revolution, not one of weapons or of political power, but a revolution in science and understanding that will change humanity. It’s been said that what the locomotive and electricity did for human muscle a century and a half ago, artificial intelligence is doing for human knowledge today as we speak. But the effect of AI is far more profound and will certainly occur over a much shorter period of time.
Existential risk became a mainstream concern for AI governance.
Though the risk of AI leading to catastrophe or human extinction had been a focus for Elon Musk and many AI researchers in prior years, 2023 saw the issue become a genuine priority among global leaders and government policymakers. The shift was led by calls from high-profile figures in the private sector such as OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis, Tesla and xAI CEO Elon Musk, and AI “godfather” Geoffrey Hinton. In May, a coalition of over 350 leading AI experts and executives signed a one-sentence statement that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” This statement immediately led to a rhetorical shift among global policymakers.
Concern about AI’s malign impact on civilization trickled down to the wider U.S. public. A survey of more than 20,000 Americans by YouGov in April 2023 reported that 46 percent were concerned about AI’s potential to cause human extinction and 69 percent supported a proposed six-month pause on AI development.
Similar anxieties about the potential catastrophic risk of AI echoed around Washington last year. At a congressional hearing in May, Sam Altman warned that AI could “cause significant harm to the world” and urgently called for greater regulation of the technology. IBM vice president and chief privacy and trust officer Christina Montgomery concurred, “with AI the stakes are simply too high,” and “what we need at this pivotal moment is clear, reasonable policy and sound guardrails.”
At the UK AI Safety Summit, 29 countries attending agreed to the Bletchley Declaration, which stated that “there is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models.”
The United States, the Netherlands, and Japan coordinated export controls to target China’s AI and semiconductor technology development . . .
In late January 2023, reports emerged that the United States had reached an agreement with the Netherlands and Japan for the two countries to impose new export controls restricting China’s access to chipmaking tools. Earlier, on October 7, 2022, the United States had imposed strict controls on exports of advanced semiconductor manufacturing equipment (SME) technology to China; however, as Japanese and Dutch companies produced important SME technology, such unilateral action had a limited effect. Details on what was included in these new restrictions were scarce until March 2023, when the two countries formally announced they would be moving forward with export controls on a wide range of semiconductor equipment and technology. Neither country explicitly mentioned China as the target.
Japan and the Netherlands capture 99 percent of the world’s market share of lithography steppers and scanners — crucial for state-of-the-art AI chips. Therefore, this was a major step forward in the U.S. mission to bar China from gaining the lead in the chip race. Still, other countries like Germany and South Korea are also significant producers in the semiconductor value chain, and the United States will need to persuade them both to get on board with controls if they want to continue to slow China’s technological advancements.
. . . However, their success in slowing China’s technological progress remains mixed.
Despite international efforts to prevent China from making significant technological advancements, the announcement of Huawei’s new Mate60 smartphone raised concerns throughout the national security community about the efficacy of the export controls. On October 17, 2023, the United States’ Bureau of Industry and Security announced updates to the October 7 controls. These updates included additional parameters for chips’ performance density, the restriction of dozens more items of semiconductor equipment, the expansion of licensing requirements to an additional 22 countries that the United States has an arms embargo with, and the addition of 13 companies to the entity list.
AI chatbots reached billions of users worldwide and continued growing rapidly in scale.
OpenAI’s ChatGPT reached over one million users in its first week of launch in November 2022. By November 2023, that number had skyrocketed to more than 100 million monthly active users. OpenAI’s success speaks to a wider public entrancement with AI chatbots around the world. Leading AI development companies launched several new large language models (LLMs) last year, including OpenAI’s updated ChatGPT-4, Meta’s LlaMa 2, and Google’s Gemini. The global demand for this technology has prompted many companies to develop chatbots trained on other languages, such as the Arabic model Jais and the Chinese model Ernie Bot, though the United States still leads in the worldwide development of LLMs.
Language models are getting bigger, both in terms of the data they are trained on and their parameters (GPT-4, for example, is rumored to have up to one trillion parameters, compared to GPT-3’s 175 billion). In fact, models have grown so large that there are legitimate concerns that companies are reaching the limits of the existing available text training data. However, their growing size comes with growing costs. While training costs are rarely disclosed by companies developing LLMs, OpenAI stated that developing and training GPT-4 cost more than $100 million, and Anthropic CEO Dario Amodei suggested that future training costs could exceed $1 billion. Growing CO2 emissions and water usage from training and operating chatbots are also attracting increased attention in terms of their effect on the environment. What these upward trends mean for AI chatbots’ profitability and scalability remains to be seen this year.
Major economies around the world took substantial steps to regulate AI . . .
The United States
In response to the transformative potential of AI, the U.S. government began to regulate it through administrative law, acknowledging the imperative to navigate the complexities of AI risks and begin establishing domestic standards. On January 26, 2023, the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) unveiled the Artificial Intelligence Risk Management Framework (AI RMF 1.0). Developed in close collaboration with both private and public sectors, the AI RMF serves as a comprehensive tool for organizations engaging with AI technologies and is designed to adapt to the evolving AI landscape. Though the RMF is not intended to be applied as part of formal regulation, many have held the framework up as substantial progress in maturing AI governance.
Announced in June 2023 at CSIS, Senate Majority Leader Chuck Schumer’s SAFE Innovation Framework marked a strategic effort to confront the profound changes brought about by AI through “comprehensive legislation.” Since September 2023, Capitol Hill has seen over 150 AI experts gather as part of Senator Schumer’s AI Insight Forums. These forums have covered an array of crucial topics, from AI innovation and workforce considerations to national security and guarding against doomsday scenarios. Notably, the ninth forum, held on December 6, 2023, featured a testimony from the Wadhwani Center’s director, Gregory Allen. This forum focused on maximizing AI development to bolster the United States’ military capabilities, aligning with Senator Schumer’s vision for an “all-hands-on-deck” effort.
Ahead of the UK AI Safety Summit, the Biden administration announced its Executive Order on Safe, Secure and Trustworthy Artificial Intelligence in late October 2023. The order broadly focused on the development of standards and testing mechanisms for AI safety, infrastructure, and social consequences (such as discrimination and effects on labor). Since the announcement, executive agencies like the Department of Defense (DOD) and State Department have released their policies on AI and detailed how the executive order’s directives will be applied in their respective agencies.
U.S. president Joe Biden
October 30, 2023
We face a genuine inflection point in history, one of those moments where the decisions we made in the very near term are going to set the course for the next decades. And with the position we lead the world, the toughest challenges are the greatest opportunities. Look, there’s no greater change that I can think of in my life than AI presents as a potential: exploring the universe, fighting climate change, ending cancer as we know it, and so much more.
The European Union
On December 8, the European Union passed its Artificial Intelligence Act, the world’s most substantial set of regulations on AI so far. After two and a half years in the making, the act was finally agreed upon following a lengthy 37-hour negotiation between EU states and the European Parliament. EU commissioner Thierry Breton confirmed the event on X, stating that “the EU becomes the very first continent to set clear rules for the use of AI” in passing the AI Act and calling it a “launch pad for EU start-ups and researchers to lead the global AI race.” Not all EU leaders agree, however; French president Emmanuel Macron condemned the act on December 11, saying “we can decide to regulate much faster and much stronger than our major competitors. But we will regulate things that we will no longer produce or invent. This is never a good idea.”
The AI Act regulates all AI sold, used, or deployed within the European Union apart from AI used for military purposes, research, and open-source models, though most provisions only apply to the “high-risk” AI systems. It advances a risk-based approach to managing AI systems by sorting levels of risk into four categories: unacceptable, high, limited, and minimal to none. Unacceptable risks banned under the act include the use of AI for manipulating human behavior, social scoring, and creating biometric databases based on sensitive social categories such as race or religion. The consequences for failing to comply with these rules are steep: under the new rules, companies could be fined €35 million or 7 percent of global revenue.
Full implementation of the new AI law is not expected to begin until 2025 at the earliest, allowing time for the European Commission to establish a regulatory oversight “AI office” in Brussels and companies to adapt to the new rules.
China
In August 2023, China’s generative AI measures went into effect. At the time, these measures were some of the most comprehensive regulations on AI and focused on a regulatory framework for generative AI services to the Chinese public. Earlier in the year, China released a draft that placed significant responsibility onto service providers on topics like ensuring the legality of the data used for training and optimization. Service providers would be met with a fine upwards of 100,000 yuan if they failed to meet these standards. However, the finalized version only requires the service providers to create measures that prioritize desired values within data training and optimization. Shifting from the strict first draft to a more diluted final draft is likely a reflection of industry input as well as sensitivity to the current economic challenges that China is facing.
The measures apply to any AI generative technology that provides services that “generate any text, image, audio, video, or other such content to the public.” Services that are not offered to the public are explicitly excluded from the legislation. Additionally, prior to providing services to the public, service providers must apply for a security assessment. In implementation, these have not been difficult to get approved. Chinese military, intelligence, and police services remain broadly exempt from all Chinese AI regulations.
In October, the Chinese government announced its Global AI Governance Initiative at the third Belt and Road forum. Xi Jinping unveiled the initiative personally, underscoring China’s AI governance ambitions, which include enhancing information exchange and technological cooperation with other countries; developing open, fair, and efficient governing mechanisms; and establishing an international institution within the UN framework to govern AI. The initiative calls for representation and equal rights when developing AI, regardless of a country’s “size, strength, or social system.” The announcement came just weeks after the creation of a BRICS AI study group which aimed to foster closer AI governance ties among the participating nations.
Chinese vice minister of science and technology Wu Zhaohui
November 1, 2023
We call for global collaboration to share knowledge and make AI technologies available to the public under open-source terms.
. . . and to strengthen international cooperation on AI governance.
Global efforts to govern AI increased dramatically in 2023. The United Kingdom hosted the first global AI Safety Summit on November 1–2, convening political and technology leaders for discussions focusing on foundation model safety. Attendees included summit host Prime Minister Rishi Sunak, European Commission president Ursula von der Leyen, Vice President Kamala Harris, prominent tech CEOs, and, to the surprise of many, China’s vice minister of science and technology, Wu Zhaohui. The most significant achievement of the summit was the Bletchley Declaration, which Sunak called a “landmark achievement that sees the world’s greatest AI powers agree on the urgency behind understanding the risks of AI.” The declaration broadly aims to promote transparency and accountability within companies developing frontier AI models and to develop AI safety evaluation metrics, tools, and research capabilities.
AI also featured prominently in G7 meetings in 2023, including the Digital and Technology Ministers’ Meeting in April and the G7 summit in May. The group released the G7 leaders’ statement, the Hiroshima Process International Guiding Principles for Organizations Developing Advanced AI Systems, and the Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems. The code of conduct was the most consequential of the documents, laying out a set of voluntary guidelines for developing advanced AI systems, such as foundation models and generative AI. It is worth noting that these documents set out voluntary guidelines as opposed to binding international regulations. The G7 countries will also publish the Hiroshima AI Process Comprehensive Policy Framework.
Several other international assemblies gathered in 2023 to discuss AI for the first time. In February, the Responsible AI in the Military Domain Summit (REAIM) in The Hague, Netherlands, convened ministers to discuss the responsible use of AI for military applications. In July, the United Nations Security Council met to discuss AI risk, at which Secretary-General António Guterres called for the creation of a UN body “to support collective efforts to govern this extraordinary technology.” The urgent need for international cooperation on balancing potential risks and benefits from AI was a common theme across all these governance efforts.
What remains to be seen is how international pledges will translate into real-world impact in 2024. Last year brought many voluntary commitments and high-level declarations. This is a start. But perhaps the greater challenge will be seeing how the grittier details of legislation, funding, and international standards unfold within a window of interest that may not last forever.
The private sector emphasized its own role in responsible AI governance.
As the U.S. government made significant steps to begin regulating AI in 2023, the private companies behind AI development have become more, not less, important in pursuing this goal. The government was proactive last year in collaborating with industry leaders to responsibly manage AI. This action has come, in part, from a recognition by Congress that it is playing catch-up to a technology and industry that far predates the government’s regulatory efforts in 2023. The majority of AI research and development exists in the private sector, as it takes extremely large datasets, technical expertise, and financial investment to develop the kinds of frontier AI models Congress is seeking to regulate. It would make sense, therefore, that Congress should seek AI companies’ input on AI legislation as it does with other industry leaders in other sectors. Senate Majority Leader Chuck Schumer opened the first AI Insight Forum on September 13, which gathered senators, CEOs, and civil society leaders to discuss AI regulation, noting that “Congress cannot do it alone.” He added, “We need help of course from developers and experts who build AI systems.” The forums accompanied other congressional hearings that heard from AI CEOs like OpenAI’s Sam Altman and IBM’s Christina Montgomery last year.
In addition to congressional hearings, AI companies were given a greater responsibility to regulate their technologies in 2023 by Biden’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. The order requires AI developers conduct red teaming on their products and report their findings to the government before they are released, placing the onus on companies to do the heavy lifting when it comes to due diligence.
However, the private sector’s growing influence in AI governance is accompanied by growing concerns that there are fundamental conflicts of interest at play. AI companies’ dual role of helping to regulate their innovations as they continue to develop them raises serious questions about whether they can truly place safety over profit. As Marietje Shaake, international policy director at Stanford’s Cyber Policy Center, wrote in the Financial Times, “Imagine the chief executive of JPMorgan explaining to Congress that because financial products are too complex for lawmakers to understand, banks should decide for themselves how to prevent money laundering, enable fraud detection and set liquidity to loan ratios. He would be laughed out of the room.” Just as lawmakers are not experts in other industries for which they craft legislation, she and others argue, they must be careful to not be captured by AI CEOs who use technical complexity to advance their own regulatory interests.
The number of AI-related lawsuits rose sharply.
New AI tools began to test existing legal frameworks in 2023, cutting across domains such as data privacy, copyright, and patent law. So far, the leading cause for lawsuits related to AI is related to data privacy, perhaps unsurprising considering the billions of data points that generative AI models are trained on from across the internet. AI companies have been resistant to revealing their training data thus far, despite long-running calls for greater transparency and concerns about user privacy from academics and civil society groups. A reported hack of ChatGPT in early December highlighted some basis for these concerns: when asked to repeat certain words like “poem” ad infinitum, ChatGPT eventually began to spit out sensitive training data, including phone numbers, names, and addresses. OpenAI has since closed this loophole by restricting ChatGPT from repeating words forever. It also announced in August that website owners can now block the company’s data-scraping web crawler, GPTBot, from accessing their pages and data for training purposes.
OpenAI has been one of many AI companies to face several lawsuits in 2023 due to alleged privacy and intellectual property violations. One class action lawsuit against OpenAI and Microsoft made headlines in June 2023 when it claimed that ChatGPT stole millions of sensitive data from hundreds of millions of internet users during training, including from social media accounts, medical records, and personal accounts. Though the complaint was dismissed in court, it was not the only one of its kind; a second lawsuit was filed against the same two companies in September and another, nearly identical, lawsuit was filed against Google parent company Alphabet in July. As of the time of writing, both cases are ongoing, though all three companies have moved to dismiss them in court.
The implications of AI for copyright law have also come under greater scrutiny in the last year. In 2023, Getty Images filed lawsuits in the United States and the United Kingdom against generative AI company Stability AI for training its model, Stable Diffusion, on copyrighted data and metadata. OpenAI, Alphabet, and Microsoft faced multiple class action cases from authors whose work, they claim, has been similarly used for training purposes without proper licensing or compensation. Text-to-image AI generators like Midjourney, DreamStudio, and DreamUp were the subject of a similar lawsuit filed in January by visual artists whose work has been used and replicated without permission.
Several companies, such as Meta, Microsoft, and Google, argued they should not have to pay to train AI models on copyrighted work, citing arguments such as AI training “is like the act of reading a book” and curbing access to copyrighted access would chill AI development. These kinds of arguments ask fundamental questions about AI regulation, like whether the unique characteristics of the technology should exempt it from certain laws.
Finally, the U.S. Federal Court Circuit upheld a precedent in patent law when it ruled in August that AI systems are not eligible to own patents for their “inventions” as they are not human beings. The verdict confirmed earlier rulings made by the U.S. Patent Office and the U.S. Copyright Office following years of legal disputes by U.S. computer scientist Stephen Thaler, who first tried to copyright an image produced by his AI system in 2019. Federal Circuit Judge Leonard Stark announced the decision in August saying “there is no ambiguity: the Patent Act requires that inventors must be natural persons; that is, human beings.” The ruling reflects similar decisions made in the United Kingdom and the European Union.
The Department of Defense took steps to safely adopt and deploy AI in its weapons systems . . .
The United States was the first country to codify a policy on autonomy in weapon systems when it first adopted DOD Directive 3000.09 in 2012. The policy did not ban the development or use of autonomous weapons — indeed many types of autonomous weapons, such as some missile defense and cyber weapons, had already been in use for decades. What 3000.09 did was place new policy and process requirements for the development and use of autonomous weapons in offensive and kinetic constructs. However, the policy was widely misunderstood as requiring “a human in the loop” and thus banning fully autonomous systems, which it did not do. In January 2023, the DOD published an updated 3000.09 that sought to address the confusion and to account for the rise in machine learning AI systems. Among other things, it formalized that adherence to the Department of Defense (DOD)’s AI Ethical Principles was a requirement at all stages of development and fielding. Reaffirmed by Deputy Secretary of Defense Kathleen Hicks, the directive mandates rigorous testing, reviews, and senior-level scrutiny for autonomous systems, aligning with the DOD’s Ethical Principles and the Responsible AI (RAI) Strategy.
On the international front, the U.S. Bureau of Arms Control, Deterrence, and Stability unveiled a ground-breaking “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy.” This declaration is fully consistent with the ideas underpinning DOD 3000.09 and seeks to make them international, particularly among U.S. allies. Introduced at the Responsible AI in the Military Domain Summit, this initiative has since garnered signatures of support from 49 countries, promoting non-legally-binding guidelines for secure AI deployment in defense contexts.
. . . and to massively accelerate the DOD’s adoption of AI-enabled autonomous systems.
On August 28, 2023, Deputy Secretary of Defense Kathleen Hicks announced the Replicator initiative. Replicator aims to “field attritable autonomous systems at scale of multiple thousands, in multiple domains, within the next 18-to-24 months.” The DOD intends for the initiative to counter China’s perceived advantage in military “mass” by rapidly acquiring and fielding large quantities of small, attritable, and relatively cheap drones. Reports suggest that this initiative was informed by analysis of the war in Ukraine.
Replicator will not solely focus on acquiring such systems; it will also build the processes that allow for “replicating the rapid adoption and delivery of technology.” This line of effort will try and build a pipeline that will enable similar efforts in the future. The Defense Innovation Unit (DIU) will play a central role in Replicator, both working with Indo-Pacific Command (INDOPACOM) end users to establish what capabilities the warfighter needs and exploring which industry partners can provide those capabilities within the relevant time frame. At least for now, the DOD is not requesting any additional funding for Replicator; instead, the initiative will take advantage of “existing funding, existing programming lines, and existing authorities.” DOD officials claim the first Replicator systems will be delivered sometime between February and August 2025.
Corporate interest in AI surged, but the “Magnificent Seven” AI tech giants continued to capture stock markets and global attention.
AI was the magic word of 2023 for firms worldwide. Business intelligence company CB Insights reported that private investment in generative AI reached an all-time high last year with an almost 450 percent increase from $3.2 billion in 2022 to $17.4 billion by Q3 2023. As of March 2023, mentions of AI in executives’ earnings calls — where CEOs discuss corporate performance and strategies with investors and leading analysts — had grown by 77 percent compared to 2022. By September, that figure accelerated to a further 366 percent.
EY’s October CEO Outlook Pulse Survey, which interviews 1,200 CEOs globally every quarter, revealed that nearly all CEOs (99 percent) are conscious of AI’s potential to disrupt current business models and are planning investments into generative AI, either by redirecting capital from other projects (69 percent) or by raising new capital (23 percent). However, the rapid change of the AI ecosystem is a significant barrier to companies’ adoption of the technology: 26 percent stated that the breakneck speed in which AI is developing makes investments risky, while another 66 percent stated that the surrounding hype makes it difficult to ascertain whether other companies have legitimate AI capabilities for partnerships and mergers and acquisitions. To what extent companies effectively adopt AI tools into their business models once the hype settles remains to be seen in 2024.
In contrast, leading technology companies saw enormous returns in 2023. The “Magnificent Seven” (a term given to Meta, Amazon, Apple, Microsoft, Google, Tesla, and Nvidia) continued to dominate the stock market last year, breaking the record for holding the largest proportion of the S&P 500’s market cap at 29 percent. Goldman Sachs reported that the seven companies’ stocks accounted for 71 percent of total returns in 2023 while all other 493 other stocks made up just 6 percent. While comparably smaller AI companies have still attracted a surge in investment — including 15 new AI unicorns in 2023 as of Q3 — the lion’s share of attention in AI was focused on tech giants in 2023.
29%
The Magnificent Seven’s record-breaking share of the S&P 500’s market cap in 2023.
71%
The Magnificent Seven’s share in the S&P 500’s total returns in 2023.
450%
The increase in mentions of AI in executives’ earnings calls as of early 2023.
Rapid technological progress showed no signs of slowing down.
2023 was a groundbreaking year for AI, in terms of both raw model capability and the technology’s capacity to deliver remarkable breakthroughs in an increasing number of scientific fields. The end of the year saw a leap in generative AI capabilities as Google released its new multimodal system, Gemini, in December, possibly foreshadowing a new era for LLMs. “Multi-modal” describes models that can process data from a variety of inputs, such as text, video, image, or audio, and can produce outputs in a similar array of forms. Unlike other LLMs like OpenAI’s ChatGPT-4, Gemini is natively multimodal, meaning it was trained on diverse inputs.
Last year also brought significant breakthroughs in AI chip capabilities. Nvidia reached a technological milestone in May 2023 when it unveiled the world’s first 100-terabyte GPU memory system in its new semiconductor model DGX GH200. The new chip is a significant advancement compared to the company’s earlier models, particularly the DGX A100, and it provides over 500 times more memory to the GPU shared memory programming model compared to a single DGX A100 320 GB system. These advanced chips open the potential for faster, more efficient AI models and computing systems — advances to look forward to in 2024.
Finally, AI continued to revolutionize an increasingly diverse number of scientific fields. In 2023, AI models from Google DeepMind made significant breakthroughs in both material science, predicting “ingredients and properties of another 2.2 million materials,” and meteorology, with its GraphCast model. Earlier in the year, Huawei published a similar model called Pangu-Weather. These breakthroughs could reveal new materials and methods to construct novel batteries, superconductors, and catalysts.
2023 MAPPING AI EVENTS
A Global Perspective on Key AI Summits and Meetings
United States
Jan 31, Washington, D.C.
The White House launches the U.S.-India Initiative on Critical and Emerging Technologies (iCET), a partnership with India to compete with China in semiconductor chips, military equipment, and AI.
Sep 13, Washington, D.C.
The first AI Insight Forum session is held on Capitol Hill. Led by Senate Majority Leader Chuck Schumer, the meeting convenes senators, prominent tech CEOs, and civil society figures to discuss U.S. government oversight of AI.
Jul 18, New York City
The UN Security Council convenes to discuss AI risks for first time.
United Kingdom
Nov 1-2, Bletchley Park
The United Kingdom’s AI Safety Summit convenes political and technology leaders at Bletchley Park.
Belgium
Dec 8, Brussels
In a global first, the European Union establishes a landmark comprehensive AI regulation by passing the EU AI Act.
Ukraine
Jan 3
Ukraine’s digital transformation minister, Mykhailo Fedorov, states that there is “potential” for introducing fully autonomous killer drones into combat with Russia in the next six months. Though no confirmed evidence of fully autonomous weapons use follows, AI features prominently in the war.
India
New Delhi
Joint initiative by the United States and India.
Bangladesh
Dec 13, Dhaka
Generative AI is widely used by multiple political parties in the lead-up to Bangladesh’s 2024 elections as deepfakes and AI-generated misinformation circulate on social media and news outlets.
China
Oct 18, Beijing
China announces its Global AI Governance Initiative at the 2023 Belt and Road Forum.
Japan
Apr 30, Takasaki
The G7 concludes its Digital and Tech Ministers’ Meeting in Takasaki, Japan, declaring member states’ commitment to an internationally cooperative, interoperable, and risk-based approach to AI governance.
May 19-21, Hiroshima
G7 leaders gather in Hiroshima to discuss inclusive governance for AI at the 2023 summit.
Uruguay
Mar 10, Montevideo
The Montevideo Declaration on Artificial Intelligence is announced at the 2023 Latin American Meeting on Artificial Intelligence (March 6–10), urging governments and companies to safely develop AI and AI regulation for Latin American countries.
Rwanda
Feb 27, Kigali
The African Union High-Level Panel on Emerging Technologies and the African Union Development Agency convene AI experts to finalize the drafting of the African Union AI Continental Strategy for Africa.
2024 THE YEAR AHEAD
Ten Developments to Monitor in 2024
01
How effectively can high-level global AI governance talks translate into tangible impact?
Global AI governance talks resulted in significant high-level commitments in 2023, from the Bletchley Declaration signed at the United Kingdom’s AI Safety Summit to the Hiroshima AI Process set in motion under the Japanese G7 presidency last year. How will such commitments translate into actionable policies and enforceable regulations in 2024?
02
How will third-party red teaming work in practice?
Biden’s AI executive order requires that AI developers evaluate their frontier models through a practice called red teaming, in which adversarial attacks are simulated to assess the vulnerabilities and robustness of a model. Developers will be required to report their findings to the government before the model is released to the public. How will third-party red teaming work in practice, and will the government be able to use the findings to keep up with the growing size and capabilities of LLMs?
03
Can Congress pass comprehensive AI legislation?
President Biden’s AI executive order was a step toward U.S. AI regulation, but its efficacy depends upon Congress passing legislation and budget allowances. Will AI regulation remain a largely bipartisan issue, and how quickly can it pass through Congress given the expected tough timing with 2024 elections?
04
How will the Italian G7 presidency meaningfully build upon the Hiroshima AI Process this year?
In 2023, the Japanese presidency put AI on the G7 agenda and committed to coordinating AI governance efforts under the Hiroshima AI Process. Interoperability between governance frameworks is a daunting yet essential task for avoiding a fragmented global AI landscape, and one that Italy has signaled it will pursue this year. What steps will the Italian presidency take to move the Hiroshima AI Process forward and to deliver the G7’s commitment to harmonized AI regulation?
05
Will scaling up AI continue to deliver new capability breakthroughs?
The substantial scaling up of LLMs produced dramatic performance improvements in 2023. Will improvements continue to grow exponentially this year, or will developers hit diminishing returns without making fundamental architectural improvements?
06
Will the DOD’s Replicator initiative get the funding it needs?
The DOD’s Replicator initiative is a comprehensive effort by the United States to accelerate the delivery of AI-enabled autonomous systems to warfighters at speed and scale. The program aims to address strategic competition with China, a subject of critical importance. What will happen if the DOD receives only token funding?
07
How will U.S. and allied export controls affect China’s progress in AI and semiconductors?
Huawei’s late August release of the Mate60 Pro raised serious questions about the enforceability of U.S. and allied export controls and their impact on China’s technological trajectory. Can the United States and its allies effectively enforce export restrictions, and how might this influence China’s technological trajectory in 2024?
08
How will AI impact major elections this year?
2024 is the biggest election year in history, with over half of the world’s population heading to the polls. It will be the first year that political parties must define their stance regarding AI, and some voters may see AI regulation featured as a question on their ballots. Both would have been almost unthinkable only a year ago. It will also be the first election cycle in which AI will likely play a significant role in election campaigning, interference, and spreading disinformation. How might the world’s busiest election year fare with AI in the mix?
09
Will open-source AI models continue to be available at leading-edge performance?
The open- versus closed-source debate largely revolves around whether AI development should prioritize transparency, collaboration, and accessibility (open source) or proprietary control, safety, competitive advantage, and intellectual property protection (closed source). Who will emerge victorious in this debate and what implications could this have for AI’s future development, regulation, and democratization?
10
Will tech giants continue to dominate AI development?
The AI landscape is currently dominated by a handful of key players like Google, OpenAI, and Meta. Will tech giants continue to dominate AI development and market returns in 2024, or will we see a more diversified landscape with winners emerging in niche industries?
GLOSSARY
Key Definitions for 2024
Agent
A system that embodies AI and can perceive its environment, make decisions, and act to achieve specific objectives. Agents are used in the field of reinforcement learning; for example, a self-driving car.
AI chatbot
A computer program that uses AI to converse, inform, and assist users in a human-like way; for example, Siri and ChatGPT. These may or may not use large language models as the source of their capabilities.
Alignment
The goal of designing AI to act in accordance with human values and desired outcomes.
Artificial general intelligence (AGI)
A theorized AI technology that exceeds human abilities in a broad range of intellectual fields and can learn flexibly.
Artificial intelligence (AI)
Computer systems that can perform tasks that mimic human intelligence; for example, learning from experience (machine learning), understanding natural language, and recognizing patterns.
Automation
The use of technology to perform tasks without human intervention.
Deep learning
A subset of machine learning where artificial neural networks with many layers are trained to perform tasks by learning patterns from large amounts of data.
Diffusion
A method by which generative models simulate the gradual evolution of images, learning, and applying statistical patterns to generate new, complex visual content.
Emergent capabilities
Unexpected functionalities or behaviors that arise from the interaction and complexity of an AI system’s components; for example, an AI trained to play a video game may discover an unconventional route not explicitly taught during training.
Existential risk
The potential for AI systems to pose catastrophic and irreversible threats to the trajectory of human civilization, especially related to extinction of the human species.
Explainability
The ability to understand how an AI system makes decisions; closely linked to transparency and accountability.
Foundation model
A pre-trained, generalized AI model that can be built upon to create more specialized models; for example, Google’s BERT and OpenAI’s GPT series.
Generative AI
AI systems that generate new content such as image, text, audio, and video.
Generative Pre-trained Transformer (GPT)
A type of large language model developed by OpenAI that is trained on internet data to process and generate text; GPTs can perform a variety of natural language tasks such as writing code, generating images, and conversing in human-like dialogue.
Graphics Processing Unit (GPU)
A specialized electronic circuit designed to accelerate the processing of graphics and parallel computing tasks; GPUs are often used to enhance the performance of deep learning models by facilitating multiple calculations simultaneously. Though originally the same GPU chips were used for computer graphics and AI applications, more recently chips have been introduced that are specifically targeting AI applications, leading some to refer to them as AI chips rather than GPUs.
Hallucination
The generation of inaccurate information by a model; often results from overfitting or exposure to biased data.
Hype
Sensationalized claims surrounding the capabilities and impact of AI, which can lead to inflated public perceptions that may not align with the current state of AI; the term is used to caution against unrealistic expectations.
Large language model (LLM)
A type of AI model that has been trained on large amounts of text data to understand and generate human-like language; for example, Baidu’s Ernie Bot and Anthropic’s Claude.
Natural language processing (NLP)
A subfield of AI focused on training computers to understand, interpret, and generate human language.
Neural network
A computational model inspired by the structure of the human brain, consisting of interconnected nodes (neurons) that work together to process and analyze data.
Open sourcing
Making the source code and development details of a software project publicly available; this allows others to contribute, modify, and use the code in their own projects.
Prompt
A specific instruction given to an AI model to generate a desired output, guiding the model’s behavior and responses.
Red teaming
A method where a “red team” (traditionally security engineers) simulates adversarial attacks and challenges to evaluate the security, robustness, and vulnerabilities of AI systems by trying to make the system produce undesired outcomes.
Reinforcement learning
A type of machine learning where an agent learns to make decisions by receiving feedback (rewards or penalties) on its actions.
Supervised learning
A type of machine learning where an AI model is trained on a labeled dataset; the model learns to map inputs to corresponding outputs and generalizes this mapping to make predictions on new data.
Transformer
A state-of-the-art neural network architecture used in advanced machine learning research. Transformers use a technique called “self-attention” to quickly learn the contextual relationship between data points (e.g., words), allowing it to generate more accurate predictions faster. Initially developed for natural language processing, they are now used for a variety of applications such as computer vision, preventing fraud, or drug discovery.
Unsupervised learning
A type of machine learning where a model is given training input data without explicit labels. The algorithm explores and finds patterns on its own, and the correctness of the model is often determined by how well it achieves its intended purpose.
Gregory C. Allen is the director of the Wadhwani Center for AI and Advanced Technologies at the Center for Strategic and International Studies (CSIS). Mr. Allen’s expertise and professional experience spans AI, robotics, semiconductors, space technology, and national security.
Georgia Adamson is a research assistant with the Wadhwani Center for AI and Advanced Technologies at CSIS. She supports research on the geopolitical and national security implications of AI and semiconductors.