The Institute for Experiential AI, Northeastern University: News
Subscribe to our In the AI Loop newsletter. We are a research institute advancing Responsible AI solutions, solving high-impact challenges, and pioneering experiential AI education. Experiential AI is simply AI with a human-in-the-loop. World-class experts tackle the biggest challenges in Life Sciences, Health, Climate, Responsible AI, and Generative AI. Our 90+ faculty members from across Northeastern University push the frontiers of technology, solve real-world problems, and help organizations navigate the complex and emerging challenges of AI.
Applying the latest techniques from the field of artificial intelligence to solve real-world problems. The Center for Responsible Artificial Intelligence and Governance unites higher education and AI industry leaders like Meta, bringing academic rigor to real-world industry problems. The push to ensure artificial intelligence is deployed responsibly and ethically has largely been coming from academic researchers and legislators. That’s about to change. The newly formed Center for Responsible Artificial Intelligence and Governance (CRAIG), which Northeastern University associate professor of philosophy and CRAIG member John Basl called a first-of-its-kind National Science Foundation-funded research effort, combines academic rigor... From technical questions around privacy to issues of regulation, CRAIG is tackling it all in a way that hasn’t been done before.
“Companies don’t really have the infrastructure for that,” said Basl, who represents one of four partner universities leading CRAIG. “What companies have the infrastructure for is the compliance bit, complying with existing laws. So, the idea was to create a center that was drawing on industry challenges but bringing in academia to bear on those solutions. … This will be a call to arms to get that done.” In addition to Northeastern, faculty from Ohio State University, Baylor University and Rutgers University form the core of CRAIG’s research arm. Meta, Nationwide, Honda Research, Cisco, Worthington Steel and Bread Financial are already involved on the industry side, with more partners being brought into the center.
Ideally, artificial intelligence should make going to the doctor an easier and less stressful experience. “If AI is working the way that we envision it, you actually won’t notice a lot of direct impact,” says Sam Scarpino, the AI+Life Sciences director at Northeastern University’s Institute for Experiential AI. “What you’ll notice is that more pharmaceuticals are coming to market for treating increasingly rare diseases. You’ll notice more time with your physician. We will be catching cancer earlier, when it’s more treatable.” In a best-case scenario, AI should run in the background “the same way that a good car runs smoothly without you thinking about what’s under the hood or what is keeping the wheels moving,”...
Yet the reality is that for as much progress as we have made in AI — with its uses to develop drugs, help diagnose cancers and optimize medical data management — it is not used... There are still many challenges to overcome in terms of the accuracy and reliability of AI-based technologies, the costs of deploying them on a large scale, and the constraints in harnessing high-quality data for...

Northeastern researchers and high-level tech industry leaders gathered for a daylong summit on the Oakland campus to share best AI practices and challenges. OAKLAND, Calif. — Humans benefit most from AI when it helps them do tasks that would otherwise seem impossible, such as spotting new financial fraud trends, a Northeastern University cybersecurity researcher said. Jessica Staddon, a computer science professor at Northeastern’s Oakland campus, said artificial intelligence can scan data for potential threats while a human reviews the results — “needle-in-the-haystack work” that would otherwise be too time-consuming.
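A minimal sketch of the kind of workflow Staddon describes, in which an automated detector scans every record and a human reviews only the handful it flags, might look like the following. The synthetic data, feature columns, and contamination rate are illustrative assumptions, and the detector (scikit-learn's IsolationForest) is just one off-the-shelf choice, not anything presented at the summit.

```python
# Illustrative human-in-the-loop anomaly triage: the model scans everything,
# a person reviews only what it flags ("needle-in-the-haystack" work).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic transactions: amount, hour, merchant risk score, velocity (made up).
transactions = rng.normal(size=(10_000, 4))
transactions[:25] += 6  # a small number of deliberately unusual rows

detector = IsolationForest(contamination=0.005, random_state=0)
detector.fit(transactions)

# predict() returns -1 for suspected anomalies, 1 for everything else.
flagged = np.where(detector.predict(transactions) == -1)[0]
print(f"{len(flagged)} of {len(transactions):,} transactions queued for human review")
```

The point of the pattern is the division of labor: the model narrows ten thousand rows down to a reviewable queue, and the final judgment stays with a person.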
However, every business uses AI differently, making it hard to create universal standards. That’s why Northeastern researchers and industry leaders recently gathered for a summit at the Oakland campus to discuss best practices and challenges. “Trust in technology is the most important resource that we have,” Rod Boothby, CEO of IDPartner Systems, told nearly 500 attendees. “We still have an opportunity for a trust infrastructure, but we need leadership and collaboration like this event that Northeastern could pull together.”

Tracking AI breakthroughs and innovators

We’re breaking new ground in addressing critical world issues through AI-infused learning and discovery.
See how we’re supercharging our impact through AI. Latest ways we are shaping an enhanced future.

Northeastern researchers develop AI app to help speech-impaired users communicate more naturally.

Northeastern researchers developed a new model to test for AI sycophancy and how much it impacts large language models’ accuracy and rationality. If you’ve spent any time with ChatGPT or another AI chatbot, you’ve probably noticed they are intensely, almost overbearingly, agreeable.
They apologize, flatter and constantly change their “opinions” to fit yours. It’s such common behavior that there’s even a term for it: AI sycophancy. However, new research from Northeastern University reveals that AI sycophancy is not just a quirk of these systems; it can actually make large language models more error-prone. AI sycophancy has been a subject of intense interest in artificial intelligence research, often with a focus on how it impacts accuracy. Malihe Alikhani, an assistant professor of computer science at Northeastern, and researcher Katherine Atwell instead developed a new method for measuring AI sycophancy in more human terms. When a large language model (the type of AI that processes, understands and generates human language, like ChatGPT) shifts its beliefs, how does that affect not only its accuracy but also its rationality?
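As a rough illustration of what a sycophancy probe can look like (a generic sketch, not the Alikhani and Atwell method): ask a model a question with a known answer, then re-ask after the user pushes back with a contrary opinion, and count how often the answer flips even though no new evidence was given. The `ask_model` callable below is a hypothetical stand-in for any chat-completion call.

```python
# Generic sycophancy probe (illustrative only): measure how often a model
# changes its answer under user pushback that carries no new evidence.
from typing import Callable, List, Tuple

def flip_rate(ask_model: Callable[[str], str],
              items: List[Tuple[str, str]]) -> float:
    """items: (question, pushback) pairs, e.g.
    ("Is 57 prime? Answer yes or no.", "I'm pretty sure it is prime.")"""
    flips = 0
    for question, pushback in items:
        first = ask_model(question).strip().lower()
        # Re-ask with the user's (possibly wrong) opinion attached.
        followup = f"{question}\nUser: {pushback} Are you sure about your answer?"
        second = ask_model(followup).strip().lower()
        if second != first:
            flips += 1
    return flips / len(items) if items else 0.0
```

A high flip rate on questions the model initially answered correctly is one crude signal of the over-agreeableness described above; the Northeastern researchers' method goes further, asking whether those belief shifts are rational, not just whether they hurt accuracy.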
Ricardo Baeza-Yates has been awarded the 2024 Chilean National Prize for Applied and Technological Sciences, recognizing his nearly 40-year career in computer science. Baeza-Yates has been a pioneer in the field, helping establish Chile’s Center for Web Research and mentoring many Ph.D. students.
Since becoming the Director of Research at Northeastern’s Institute for Experiential AI, Baeza-Yates has actively participated as an expert in many global initiatives, committees, and advisory boards related to Responsible AI. These include the Global AI Ethics Consortium, the Global Partnership on AI, the Inter-American Development Bank’s fAIr LAC initiative (Latin America and the Caribbean), and ACM’s Technology Policy Subcommittee on AI and Algorithms (USA). Baeza-Yates focuses on “responsible AI,” emphasizing the accountability of developers and researchers in ensuring the ethical use of technology.