7 things a science journalist learned about AI and cognitive science while writing her books | Continue reading
One of the most important issues in contemporary societies is the impact of automation on human work. As AI becomes ever more advanced, how will it shape that automation? | Continue reading
MLCommons.org introduces two new public datasets for speech recognition. The People’s Speech is the first large-scale, permissively licensed ASR dataset that includes diverse speech and environments. | Continue reading
How do humans transfer their knowledge and skills to artificial decision-making agents more efficiently? What kind of knowledge and skills should humans provide and in what format? | Continue reading
An interview with Upol Ehsan on Human-Centered Explainable AI and Expanding Explainability for AI | Continue reading
Interpretability makes black-box models less opaque, imbuing them with the power to explain their predictions and decisions to their users. How can this be done for AI? | Continue reading
On why AI must be able to compose models dynamically to generate combinatorial representations. Dr. Jonathan Mugan is a principal scientist at DeUmbra and is the author of The Curiosity Cycle. | Continue reading
This piece was originally published on the Stanford CRFM and Stanford HAI blogs. Recently, we released our report on foundation models, launched the Stanford Center for Research on Foundation Models (CRFM) as part of the Stanford Institute for Human-Centered AI (HAI), and hoste … | Continue reading
Introduction AI systems are compute-intensive: the AI lifecycle often requires long-running training jobs, hyperparameter searches, inference jobs, and other costly computations. They also require massive amounts of data that might be moved over the wire, and require specialized … | Continue reading
Over a hundred researchers at Stanford think so. We are not so sure. | Continue reading
A primer on automated story generation and how it strikes at some fundamental research questions in artificial intelligence. | Continue reading
On the field of Machine Learning Systems and how it addresses the new challenges of ML with a lens shaped by traditional systems research | Continue reading
An argument that it is time to rethink our approach to natural language understanding, since the ‘big data’ approach to NLU is implausible and both technically and theoretically flawed. | Continue reading
On how the creation, development, and deployment of machine translation technology is historically entangled with practices of surveillance and governance | Continue reading
In the first year of my PhD, I started trauma recovery therapy to heal from domestic violence. It mostly consisted of something called “reprocessing sessions”; using a technique called EMDR we would revisit traumatic memories in my life, and try to figure out what beliefs I had … | Continue reading
Machine learning models are ubiquitous in our lives. Are they fair, transparent, and just in their decision-making processes? | Continue reading
A review of recent advances in prompting methods. | Continue reading
On new multi-task learning methods that automatically learn what to learn together | Continue reading
Will self-driving vehicles actually be safer than human drivers? There are good reasons to believe that some types of autonomous vehicles may not be capable of handling edge cases safely. | Continue reading
We’re excited to announce the 2021 Gradient Prize! TLDR: Submit full, finished pieces between June 1 and September 1, 2021 for a chance to win a total of $1000 in prizes — we can’t wait to see what you come up with! To enter the contest, just fill out | Continue reading
Numerous research groups have helped in the quest to contain the pandemic. This piece reviews where these efforts have been realized as practical solutions, i.e. being used by corporations, governments, or individuals. | Continue reading
On why explainability is an inherently human-centered problem where we cannot afford a technocentric view | Continue reading
The unprecedented interest, investment, and deployment of machine learning across many aspects of our lives in the past decade has come with a cost. Although there has been some movement towards moderating machine learning where it has been genuinely harmful, it’s becoming increa … | Continue reading
Three years ago today, the Gradient published its first set of articles. Since then, we’ve had the privilege of working with dozens of exceptional AI researchers and practitioners and sharing their work with hundreds of thousands of readers all across the globe. We’ve been especi … | Continue reading
Some objects grab our attention when we see them, even when we are not exactly looking for them. How precisely does this happen? And, more importantly, how can we incorporate this phenomenon to improve our computer vision models? | Continue reading
The potential of machine learning has grown significantly over the last decade following the improvements in computational power. However, to achieve accurate machine learning solutions, we need both complex architectures and enough data to feed them. Centralized solutions, where d … | Continue reading
Digital harassment is a problem in many corners of the internet, like internet forums, comment sections and game chat. In this article you can play with techniques to automatically detect users that misbehave, preferably as early in the conversation as possible. | Continue reading
Will “robotic judges that can determine guilt” really be “commonplace” within 50 years? And should they be? | Continue reading
How state-of-the-art methods for interpreting neural networks have evolved over the past decade | Continue reading
Exponential compute power shrouded by decoherence and cryogenic requirements—our quest to wield quantum computers is a tale of David and Goliath. | Continue reading
Dr. Timnit Gebru's contributions range from circuit design at Apple to computer vision research at Stanford to her global leadership in AI Ethics | Continue reading
A broad overview of the sub-field of machine learning interpretability: conceptual frameworks, existing research, and future directions. | Continue reading
With so much AI research, and so many poor reviews for this research, how can the community work to improve things? | Continue reading
Artificial neural networks and deep learning have taken center stage as the tools of choice for many contemporary machine learning practitioners and researchers. But there are many cases where you need something more powerful than basic statistical analysis, yet not as complex or … | Continue reading
Why are some problems in medical image analysis harder than others for AI, and what can we do about them? In a landmark paper [1], Alan Turing proposed a test to evaluate the intelligence of a computer. This test, later aptly named the Turing Test, describes a person interacting … | Continue reading
As the field of machine learning has become ever more popular, a litany of online courses has emerged claiming to teach the skills necessary to “build a career in AI”. Is that what they do, though? | Continue reading
Modern technology has enabled the development and deployment of pervasive, precise surveillance that may be slipping out of the public’s control. | Continue reading
What does Microsoft getting an "exclusive license" to GPT-3 mean for the future of AI democratization? | Continue reading
My engineering friends often ask me: deep learning on graphs sounds great, but are there any real applications? While Graph Neural Networks are used in recommendation systems at Pinterest, Alibaba and Twitter, a more subtle success story is the Transformer architecture, which ha … | Continue reading
On unifying many of deep learning’s problems under the concept of "shortcuts", and what we can do to better understand and mitigate shortcut learning. | Continue reading
The ability to understand and generate novel sentences is the primary hallmark of human languages. Using finite means, a competent speaker can process an infinite number of signals which are themselves at the basis of fundamental human activities, from communicating complex instr … | Continue reading
On how comparison studies between AI and humans can be fruitful, if care is taken. | Continue reading
A recent computer vision paper and demo prompted a lot of discussion due to demonstrations of its bias, which is summarized and reflected upon here. | Continue reading
A practitioner's view on the problems with conventional STT research in industry and academia. | Continue reading
First CV, and then NLP, have had their 'ImageNet moment' — a technical shift that makes tackling many problems much easier. Could Speech-To-Text be next? | Continue reading