Meet CRAIG: Northeastern's Groundbreaking Responsible AI Center

Bonisiwe Shabane

The push to ensure artificial intelligence is deployed responsibly and ethically has largely been coming from academic researchers and legislators. That’s about to change. The newly formed Center for Responsible Artificial Intelligence and Governance (CRAIG), which Northeastern University associate professor of philosophy and CRAIG member John Basl called a first-of-its-kind National Science Foundation-funded research effort, combines academic rigor... From technical questions around privacy to issues of regulation, CRAIG is tackling it all in a way that hasn’t been done before. “Companies don’t really have the infrastructure for that,” said Basl, who represents one of four partner universities leading CRAIG. “What companies have the infrastructure for is the compliance bit, complying with existing laws.

“So, the idea was to create a center that was drawing on industry challenges but bringing in academia to bear on those solutions. … This will be a call to arms to get that done.” Continue reading at Northeastern Global News. The Center on Responsible Artificial Intelligence and Governance unites higher ed and AI leaders like Meta, bringing academic rigor to real-world industry problems.

In addition to Northeastern, faculty from Ohio State University, Baylor University and Rutgers University form the core of CRAIG’s research arm. Meta, Nationwide, Honda Research, Cisco, Worthington Steel and Bread Financial are already involved on the industry side, with more partners being brought into the center.

Typically, responsible AI is the first element that gets cut out of any AI-related project at a company, said Cansu Canca, director of Northeastern’s Responsible AI Practice. CRAIG addresses that core tension by letting private industry partners identify problems they face in this area. The researchers at CRAIG then propose research projects that address those specific challenges.



The National Science Foundation (NSF) recently awarded funding to The Ohio State University and its partner universities to create the Center on Responsible AI and Governance (CRAIG). CRAIG, which will be housed at Ohio State’s Moritz College of Law, will focus on developing the knowledge and workforce required to pursue safe, accurate, impartial, and accountable artificial intelligence (AI). Ohio State and its partners, Baylor University, Northeastern University, and Rutgers University, will identify the most pressing responsible AI challenges and develop scalable solutions, along with the workforce required to address them. Each of the CRAIG sites will bring together faculty from computer science, law, business, ethics, and social science.

That will enable the research teams to take a holistic view that sees all dimensions of the problem and how they interact with one another. That is where truly creative and effective solutions come from. Several industry partners, including leading tech, auto, and insurance companies, have already expressed interest in being part of CRAIG, which is funded through the NSF’s Industry-University Cooperative Research Centers (IUCRC) program. Their knowledge, in addition to insights from government agencies, will help advance research surrounding responsible AI. “Industry has unique knowledge of the technologies, how they are being used, and existing responsible AI efforts,” Hirsch explained. “Government agencies understand how the public sector is using AI and the public values that inform responsible AI.

And academics bring the research methods, creativity, and commitment to science and objectivity to generate credible solutions. What excites me the most about CRAIG is the prospect of these coming together to solve the hard research problems responsible AI presents.”
