
Aligning artificial intelligence with human ethics

AI safety research by MIT senior Audrey Lorvo focuses on minimizing the risks that artificial intelligence deployment poses to humanity.

Artificial intelligence (AI) safety research led by MIT senior Audrey Lorvo aims to reduce the risks arising from AI use, deployment, and human interaction.


Audrey Lorvo, who studies computer science, economics, and data science, has immersed herself in AI safety: the field of ensuring that AI models, as they grow more intelligent, remain reliable and serve humanity's best interests. It is an evolving area that tackles technical hurdles such as robustness and AI's alignment with human values, addresses societal concerns such as transparency and accountability, and confronts the potential existential risks posed by increasingly powerful AI tools.

Lorvo, an MIT Schwarzman College of Computing Social and Ethical Responsibilities of Computing (SERC) scholar, focuses on how AI might automate its own research and development. As part of the Big Data research group, she is examining the social and economic implications of AI's potential to accelerate research on itself. Lorvo underscores the importance of carefully scrutinizing AI's rapid advances and their repercussions, and of ensuring that organizations have suitable frameworks and strategies in place to manage risks.

"Ensuring AI doesn't slip out of our control as we edge closer to artificial general intelligence (AGI) is becoming increasingly critical," says Lorvo. AGI is the prospect of AI matching or surpassing human cognitive abilities.

To get a better grasp on the technical aspects of AI safety, Lorvo has participated in the AI Safety Technical Fellowship. This fellowship offers a platform to scrutinize existing research on aligning AI development with thoughtful considerations of potential human impact. "The fellowship gave me valuable insights into the technical questions and challenges surrounding AI safety, potentially paving the way for better AI governance strategies," she shares. Lorvo stresses that companies on the AI frontier are constantly pushing boundaries, necessitating the implementation of effective policies that prioritize human safety without stifling exploration.

Lorvo arrived at MIT with a burning desire to engage in a course of study that would foster collaboration between science and the humanities. The wealth of choices at the Institute forced her to make tough decisions. "There are countless ways to enhance the quality of life for individuals and communities," she says, "and MIT offers various paths for investigation."

Starting with economics, a subject she enjoys for its emphasis on quantifying impact, Lorvo explored math, political science, and urban planning before settling on Course 6-14. Professor Joshua Angrist's econometrics classes convinced her that economics was the way to go, and the major's data science and computer science components appealed to her because of AI's expanding reach and potential impact. Lorvo has also pursued concentrations in urban studies and planning and in international development.

"Students at MIT care about the impact they make," Lorvo observes. She's learned a great deal about AI safety from the MIT AI Alignment group. "Marginal impact," the additional effect of a specific investment of time, money, or effort, is a method for measuring how much a contribution adds to what is already being done, rather than focusing on the total impact. This notion influences where people opt to channel their resources, a concept that resonates with Lorvo.

"In a world of limited resources, a data-driven approach to tackling our biggest challenges can benefit from a targeted approach that directs people to where they're likely to do the most good," she says. "If you want to maximize your social impact, considering your career choice's marginal impact can be incredibly valuable."

Lorvo values MIT's focus on holistic student development and has benefited from opportunities to explore disciplines like philosophy through MIT Concourse, a program that encourages dialogue between science and the humanities. "Concourse strives to offer guidance, clarity, and purpose for scientific, technical, and human pursuits," she says.

Outside the classroom, Lorvo dedicates time to creating enriching experiences and fostering connections with her classmates. "I'm blessed to have the flexibility to balance my coursework, research, and club commitments with other activities, such as weightlifting and off-campus initiatives," she says. "There are always so many clubs and events available across the Institute."

These opportunities have broadened Lorvo's perspective, challenged her beliefs, and exposed her to new interest areas that have steered her life and career choices for the better. Fluent in French, English, Spanish, and Portuguese, Lorvo admires MIT for the international experiences it provides for students. "I've interned in Santiago de Chile and Paris with MISTI and helped test a water vapor condensing chamber that we designed in a fall 2023 D-Lab class in collaboration with the Madagascar Polytechnic School and Tatirano NGO, and have enjoyed the opportunities to explore economic inequality through my International Development and D-Lab classes," she says.

As president of MIT's Undergraduate Economics Association, Lorvo networks with fellow students passionate about economics while continuing to expand her understanding of the field. "Even as a senior, I've found new facets of the MIT community to delve into and appreciate," she says. "I encourage other students to keep exploring groups and classes that spark their interests throughout their time at MIT."

Upon graduation, Lorvo envisions herself continuing to delve into AI safety and researching governance strategies that can help ensure the safe and productive deployment of AI. "Effective governance is the lynchpin to AI's successful development and our ability to capitalize on its transformative potential," she says. "We must keep a keen eye on AI's expansion and capabilities as the technology evolves."

Navigating the complexities of technology's impact on humanity, promoting good, pursuing constant improvement, and nurturing environments for groundbreaking ideas continue to motivate Lorvo. The interplay of the humanities and the sciences shapes much of what she does. "I've always aspired to contribute to enhancing human lives, and AI represents humanity's greatest challenge and opportunity yet," she says. "I believe the AI safety field can benefit from individuals with interdisciplinary experiences like the ones I've been fortunate to acquire, and I encourage anyone passionate about shaping the future to delve into it."

Enrichment Data:

A comprehensive approach to governing AI safety addresses technical challenges and societal concerns while ensuring meaningful human engagement. The following outline describes how organizations can support the beneficial deployment of AI while mitigating risks:

Technical Challenges

  • Risk Assessment and Management:
    • Implement frameworks such as ISO/IEC 42001 and the NIST AI Risk Management Framework (RMF), along with threat-modeling approaches like STRIDE and DREAD and OWASP guidance for machine learning, to assess and manage AI risks across the lifecycle.
  • Model Verification and Validation:
    • Enforce rigorous testing procedures, including model inventory management, observability requirements, and validation techniques, to ensure AI systems are reliable, secure, and perform as intended (a minimal validation-gate sketch follows this list).
  • Data Management:
    • Create robust data management processes to ensure data quality, integrity, and privacy, fostering trust in AI systems and preventing data-driven risks.
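
To make the verification and validation item above concrete, here is a minimal sketch of a pre-deployment validation gate in Python. It is illustrative only: the model name, metric names, acceptance thresholds, and risk-register fields are assumptions made for this sketch, not requirements drawn from ISO/IEC 42001, the NIST AI RMF, or any other framework named above.

"""Hypothetical pre-deployment validation gate (illustrative sketch only)."""
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ValidationResult:
    """One entry in a simple model-inventory / risk-register log."""
    model_id: str
    metrics: dict
    passed: bool
    notes: list = field(default_factory=list)
    checked_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


# Assumed acceptance thresholds; a real organization would set these per
# use case as part of its own risk-management process.
THRESHOLDS = {
    "accuracy": 0.90,               # minimum acceptable accuracy
    "worst_group_accuracy": 0.80,   # simple fairness proxy
    "missing_feature_rate": 0.02,   # data-quality ceiling
}


def validate_model(model_id: str, metrics: dict) -> ValidationResult:
    """Compare reported metrics against thresholds and record the outcome."""
    notes = []
    passed = True
    for name, threshold in THRESHOLDS.items():
        value = metrics.get(name)
        if value is None:
            passed = False
            notes.append(f"metric '{name}' was not reported")
        elif name == "missing_feature_rate":
            if value > threshold:
                passed = False
                notes.append(f"{name}={value:.3f} exceeds ceiling {threshold}")
        elif value < threshold:
            passed = False
            notes.append(f"{name}={value:.3f} below minimum {threshold}")
    return ValidationResult(model_id=model_id, metrics=metrics,
                            passed=passed, notes=notes)


if __name__ == "__main__":
    # Example: a model that meets the accuracy bar but fails the fairness proxy.
    result = validate_model(
        "demand-forecast-v3",
        {"accuracy": 0.93, "worst_group_accuracy": 0.71,
         "missing_feature_rate": 0.01},
    )
    print(result)

In practice, each record of this kind would be appended to a model inventory so that reviewers and auditors can trace which checks a system passed before deployment.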

Societal Concerns

  • Regulatory Compliance:
    • Adhere to emerging regulations such as the EU AI Act, focusing on ensuring AI systems are safe and trustworthy for societal use.
  • Public Engagement and Transparency:
    • Encourage open dialogue about AI development and deployment to address societal fears and build trust.
  • Ethical Considerations:
    • Prioritize ethical AI by focusing on fairness, accountability, and social responsibility, incorporating human values into AI systems to align with societal norms.

Human Engagement

  • Stakeholder Involvement:
    • Engage a diverse group of stakeholders, including developers, policymakers, ethicists, and users, in AI governance to ensure that AI systems meet societal needs and are aligned with human values.
  • Education and Awareness:
    • Educate the public about AI benefits and risks, fostering a supportive environment for responsible AI development and deployment.
  • Inclusive Decision-Making:
    • Ensure that AI decision-making processes are transparent and involve human oversight to prevent unintended consequences and ensure accountability.

By addressing technical challenges and societal concerns and by ensuring meaningful human engagement, organizations can effectively govern AI safety, mitigating potential risks while promoting beneficial deployment.

  1. Audrey Lorvo, a scholar at the MIT Schwarzman College of Computing Social and Ethical Responsibilities of Computing (SERC), is researching how AI might automate its own research and development processes.
  2. Lorvo, an advocate for AI safety, emphasizes the need for organizations to carefully scrutinize AI's relentless advancements and their repercussions.
  3. To gain technical insights into AI safety, Lorvo participated in the AI Safety Technical Fellowship, a platform that offers a chance to study existing research on aligning AI development with thoughtful considerations of potential human impact.
  4. Lorvo highlights the importance of studying the social and economic implications of AI's rapid advances, including its potential to accelerate research on itself.
  5. At MIT, Lorvo chose Course 6-14 after Professor Joshua Angrist's econometrics classes convinced her that economics was the way to go; the major's data science and computer science components appealed to her because of AI's expanding reach and potential impact.
  6. Lorvo values MIT's focus on holistic pupil development and has benefited from opportunities to delve into disciplines like philosophy through MIT Concourse, a program that encourages dialogue between science and the humanities.
  7. As president of MIT's Undergraduate Economics Association, Lorvo networks with fellow students passionate about economics and continues to expand her understanding of the field.
  8. Upon graduation, Lorvo envisions herself researching governance strategies for the safe and productive deployment of AI.
  9. A comprehensive approach to governing AI safety involves addressing technical challenges, societal concerns, and ensuring human engagement across various aspects, such as risk assessment and management, model verification and validation, data management, regulatory compliance, public engagement and transparency, ethical considerations, stakeholder involvement, education and awareness, and inclusive decision-making.
