Artificial intelligence

Stanford HAI — Coding art

A new tool powered by a large language model makes it easier for generative artists to create and edit with precision.

Stanford HAI — The problem of pediatric data

Medical algorithms trained on adult data may be unreliable for evaluating young patients. But children’s records present complex quandaries for AI, especially around equity and consent.

Stanford HAI — AI uncovers bias in dermatology training tools

A model trained on thousands of images in medical textbooks and journal articles found that dark skin tones are underrepresented in materials that teach doctors to recognize disease.

Stanford HAI — Why ethics teams can’t fix tech

New research suggests that tech industry ethics teams lack resources and authority, making their effectiveness spotty at best.

Stanford HAI — ChatGPT outscores med students on clinical exam questions

Will AI’s ability to analyze medical text and offer diagnoses force us to rethink how we educate doctors?

Stanford Graduate School of Business — AI can coach you to lose weight, but a human touch still helps

AI-powered weight loss coaching works, but it’s more effective when users also interact with real people. Empathy could be the key, researchers say.

Stanford HAI — AI’s moonshot moment

Stanford HAI leaders urged investment and leadership to unlock AI’s potential during a recent meeting with President Biden.

Stanford HAI — There’s a faster, cheaper way to train large language models

The development of large language models has been dominated by big tech companies because pretraining them is so expensive. Enter Sophia, a new optimization method developed by Stanford computer scientists.

Stanford HAI — A blueprint for using AI in psychotherapy

A working paper proposes a three-stage process, similar to autonomous vehicle development, for responsibly integrating AI into psychotherapy.

Stanford HAI — New tool reveals language models’ political bias

A new tool finds that widely used large language models have a decided bias on hot-button topics, one that may be out of step with popular opinion.
