Anthropic Now Lets Kids Use its AI Technology With Safety Measures
Minors will be able to use third-party apps with Anthropic's AI models only if specific safety measures are put in place.
Teens and preteens can now use third-party apps powered by Anthropic's AI models. The AI company has announced a policy change allowing minors to use its generative AI models, provided certain safeguards are in place.
According to Anthropic, minors will be able to use third-party apps with its AI models only if developers of these apps implement specific safety features and disclose to users which Anthropic technologies they are leveraging.
To ensure its policies are strictly adhered to, Anthropic has listed several safety measures that developers must implement. These include age verification systems, content moderation and filtering, and educational resources on "safe and responsible" AI use for minors.
The company also confirmed that it has created a "child safety system prompt" that tailors AI product experiences for minors, which developers must implement in their apps if they want to make them available to kids.
Anthropic recognizes that children are increasingly turning to AI tools for schoolwork and personal matters, which is why it is revising its policies.
The move also keeps Anthropic competitive with its rivals: OpenAI, which is conducting a study on child safety and has announced a partnership with Common Sense Media to collaborate on kid-friendly AI guidelines, and Google, which recently made its chatbot, Gemini, available to teens in English in selected regions.
With 48% of high school students in the U.S. using popular AI tools for one task or another, according to a report released last year by ACT, it may only be a matter of time before the use of AI tools by kids becomes globally accepted.