The Armilla Review is a weekly digest of important news from the AI industry, the market, government and academia, tailored to the interests of our community: AI evaluation, assurance, and risk.
How it started vs how it’s been going recently…
Originally founded as a non-profit that would promote openness in AI research while creating safe AI aligned with human values, OpenAI has been candid about the risks and vulnerabilities associated with using ChatGPT, going so far as to suggest its efforts should be regulated and, in the future, independently audited. The contrast between those origins and the company’s current direction has been causing a number of commentators, including at least one former investor, some “angst”.
In this newsletter, you’ll find:
Top Articles
Stephen Wolfram explores the broader picture of what’s going on inside ChatGPT and why it produces meaningful text, discussing models, the training of neural nets, embeddings, tokens, transformers, and language syntax.
AGI has the potential to give everyone incredible new capabilities; we can imagine a world where all of us have access to help with almost any cognitive task, providing a great force multiplier for human ingenuity and creativity. On the other hand, AGI would also come with serious risk of misuse, drastic accidents, and societal disruption. Because the upside of AGI is so great, we do not believe it is possible or desirable for society to stop its development forever; instead, society and the developers of AGI have to figure out how to get it right.
Elon Musk has approached artificial intelligence researchers in recent weeks about forming a new research lab to develop an alternative to ChatGPT, the high-profile chatbot made by the startup OpenAI.
Common methods to remove ML bias fail to correct for “social norm bias.” There’s a better way to make algorithms more fair in dealing with race and gender.
During the Supreme Court’s Gonzalez v. Google hearing, Justice Neil Gorsuch touched upon potential liability for generative AI output.
Chinese regulators have reportedly told the country’s tech giants not to offer access to AI chatbot ChatGPT over fears the tool will give “uncensored replies” to politically sensitive questions.
California policymakers are taking a crack this year at regulating the use of artificial intelligence, given its growing prominence in everyday life, from teenagers using chatbots to help with homework to employers screening prospective job applicants.
Microsoft announced the new AI-powered Bing: a search interface that incorporates a language-model-powered chatbot that can run searches for you and summarize the results, plus do all of the other fun things like gaslighting and threatening users.
Large law firms are using a tool made by OpenAI to research and write legal documents. What could go wrong?
Hessian Digital Minister Prof. Dr. Kristina Sinemus and VDE President Alf Henryk Wulf open a hub in Frankfurt. The partners intend to improve the quality of artificial intelligence in order to increase the competitiveness of AI products and reduce risks.
The widespread use of LLMs comes with significant ethical and social challenges. Previous research has pointed to auditing as a promising governance mechanism to help ensure that AI systems are designed and deployed in ways that are ethical, legal, and technically robust. However, existing auditing procedures fail to address the governance challenges posed by LLMs, which are adaptable to a wide range of downstream tasks. To help bridge that gap, researchers at Oxford offer three contributions in this article.
Top Tweets
/GEN |
Futuristic digital art of competing CEOs building the same thing / DALL-E