There’s a faster, cheaper way to train large language models
Development of large language models has been dominated by big tech companies because pretraining is extensive and expensive. Enter Sophia, a new optimization method developed by Stanford computer scientists.
Analysis finds history textbooks misrepresent the scientific consensus around climate change
A new AI-driven analysis finds the most popular U.S. history textbooks used in California and Texas commonly misrepresent the scientific consensus around climate change.
There’s a problem with making technology genderless
There’s a push to make tech genderless to avoid perpetuating stereotypes, but research shows gender is one of the fundamental ways we connect with objects.
A generative search engine is supposed to respond to queries using content extracted from top web search hits, but there’s no easy way to know when it’s just making things up.
Why GPT detectors aren’t a solution to the AI cheating problem
At least seven algorithms promise to expose AI-written prose, but there's one problem: they are especially unreliable when the author is not a native English speaker.