OpenAI proposes regulatory body

Meta learns 1000 languages

Keeping it short and sweet today with two major stories coming out of OpenAI and Meta. One company has just put forward a recommendation for an international regulatory body, and the other has come out with a game-changing speech-to-text model. Let's dig in!

As always, if you enjoy reading our posts, be sure to spread the word!

Here’s what we have lined up for you today -

  1. OpenAI has proposed an international regulatory body

  2. Meta understands 1000 languages

  3. Adobe adds a generative AI feature to Photoshop

Every important technology is regulated: synthetic biology, nuclear energy, drug development. Industries where real human lives are at stake are naturally subject to heavy public-policy oversight.

The founders of OpenAI are now calling for the formation of an AI agency that would monitor and mitigate the existential risks posed by AI. Sam Altman draws a parallel between the International Atomic Energy Agency and a potential global AI regulatory body.

The body would ideally be responsible for -

  • Conducting audits

  • Restricting deployment of potentially harmful AI

  • Inspecting systems

  • Tracking server and energy usage

OpenAI believes this body should focus primarily on the bigger players in the AI domain rather than clamping down on smaller upstarts that might not have the resources to follow such rigid guidelines strictly.

Oversight is important, and the existence of a regulatory body makes complete sense. There are, however, some who believe that OpenAI's push for regulation is a self-serving move to solidify its dominance in the AI domain by acting as the steward drafting the rules in the first place.

Speech recognition systems today can understand at most a few hundred languages. Well, Meta just released a model that recognises over 3,000 languages and can perform full speech-to-text on about 1,100 of them.

The model was trained on a massive corpus of New Testament text and speech recordings from the Bible.

Interestingly, even though the model was trained on religious text, it performs well on general speech too. So much so that it makes half the errors of OpenAI's Whisper model while covering over 11 times as many languages.

The code is open-sourced on GitHub, which should allow developers to build additional tools like the following (see the quick sketch after this list) -

  • Auto subtitles

  • Translation tools

  • Communication services
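For a sense of what building on the model might look like, here is a minimal speech-to-text sketch. It assumes the Hugging Face transformers integration of the MMS models and the `facebook/mms-1b-all` checkpoint name; the exact loading API and model ID may differ from what Meta ships in its own repository.

```python
# Sketch: speech-to-text with Meta's MMS model via the Hugging Face
# transformers integration. The checkpoint name and target-language
# code below are assumptions for illustration.
import torch
import torchaudio
from transformers import AutoProcessor, Wav2Vec2ForCTC

model_id = "facebook/mms-1b-all"  # assumed checkpoint name
processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Pick the target language by its ISO code (here: French) and load
# the matching adapter weights.
processor.tokenizer.set_target_lang("fra")
model.load_adapter("fra")

# Load a local audio clip and resample to the 16 kHz the model expects.
waveform, sample_rate = torchaudio.load("clip.wav")
waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

inputs = processor(waveform.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding from logits to text.
ids = torch.argmax(logits, dim=-1)[0]
print(processor.decode(ids))
```

A subtitling or translation tool would then be a thin layer on top of transcriptions like this, paired with a translation model.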

More languages will be added to the model over time, and who knows, maybe it will invent its own language as well.

Nonetheless, Meta has been on a tear recently, and its FAIR team probably has a lot more in the tank to share.

Around the industry -

Generative fill by Adobe

Adobe has brought its first generative AI feature directly into Photoshop!