Discord joins the AI Party

And Anthropic loves AI Safety

Welcome to The Alignment. While we exclusively discuss AI in this newsletter, it's important to acknowledge the roller-coaster weekend Silicon Valley went through with the collapse of Silicon Valley Bank. Thankfully, federal regulators have reassured depositors (many of whom are small businesses) that they will be made whole.

Now, back to our regularly scheduled programming. As always, if you enjoy reading our posts, be sure to spread the word!

Here’s what we have lined up for you today -

  1. Discord joins the AI party

  2. Anthropic releases an in-depth blog post about AI safety

  3. White-boarding and AI are a good match

  4. Animators love AI

Discord, like many others, has announced its partnership with OpenAI to augment its bots with ChatGPT-like capabilities. Discord relies heavily on conversational bots to facilitate communication within its communities. The platform claims that more than 3 million Discord servers already use an AI experience, and that 10% of new users are signing up specifically out of interest in AI-based communities.

The AI upgrades will be made on three fronts -

  1. Clyde - Discord’s resident bot will be augmented by ChatGPT.

  2. Automod - Discord’s moderation product will also use AI models to automatically moderate Discord servers.

  3. Conversation Summaries - Users who have been away will be able to catch up through AI-generated summaries of the discussions they missed.

Discord also announced an AI incubator to support developers and startups building AI products on Discord. A number of well-capitalised startups in the generative AI space already use Discord to interface with their customers, most notably Midjourney and Lexica.

We haven’t seen a public product release from Anthropic yet, but the startup has been making noise as Google’s venture bet to bring AGI to the masses. One thing the company has been doing consistently is over-communicating its work on AI safety.

Anthropic’s long essay (which is worth a read) lays out its core views on AI safety and the need to align AI systems. The folks at Anthropic discuss several key principles they believe should guide AI safety research and development: transparency, robustness, alignment, and coordination.

The core argument of the essay is that AI systems should be designed so that their decision-making processes are transparent and understandable to humans. They should also be robust to errors and uncertainties, and their objectives should be aligned with human values and goals. The authors further argue that coordination is important in AI safety, both between different AI systems and between humans and AI systems, and suggest that researchers and policymakers should work together to develop regulations and standards for AI safety.

At this stage it looks as if Anthropic has models comparable to the ones its competitors are busy deploying in the market. However, it is choosing to release them in a limited capacity to minimise the associated risks. The company believes those risks are potentially existential and need to be controlled very carefully before the models are let loose in the open.

Around the industry -

Product Watch: Miro AI -

Everyone’s beloved whiteboard tool got a massive AI upgrade.

Don't be afraid, be happy

When veteran Disney animators are excited, mind-blown, and optimistic about AI-generated animation, we know things are moving in the right direction.