At Making Waves Education Foundation, we envision a future where ethically designed and unbiased artificial intelligence (AI) can revolutionize education by providing tailored support to underrepresented students, fostering an inclusive learning environment, and empowering them to achieve their full potential in college and beyond.
The approach we use to develop and deploy safe and responsible AI resources is grounded in principles that align with our commitment to educational equity and the well-being of our diverse students.
We design and develop AI resources that promote inclusivity and fairness, avoiding biases that may discriminate against certain groups of students based on race, gender, socioeconomic background, or any other protected characteristic.
We collaborate with other educational institutions, organizations, and experts in the AI field to share knowledge, best practices, and resources. We foster partnerships to continuously improve AI resources and promote ethical AI use throughout the education sector.
Our education and employment copilot works by using an advanced language model from OpenAI called GPT-4o to answer questions about education and employment. Large language models like GPT-4o are developed by training them on massive amounts of text from the internet, helping them learn grammar, facts, and reasoning abilities.
When a person sends a question to our copilot, the language model processes it and generates a relevant response based on its training. The more information a person provides, the more accurate and helpful the answer will be.
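To illustrate the request flow at a high level, here is a minimal sketch using the OpenAI Python SDK; the question shown is an example only and does not reflect our production configuration:

```python
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

# Minimal illustration of the question-and-answer flow: the user's question is
# sent to GPT-4o and the generated answer is returned. Example question only.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "What documents do I need to apply for financial aid?"},
    ],
)
print(response.choices[0].message.content)
```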
In addition to answering questions, our copilot also sends “nudges,” or check-in texts, to its users. These messages are tailored to a user’s goals and are written by human experts. They can include reminders, tips, and other useful information related to education and employment exploration.
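As a purely illustrative sketch (this is not our production schema; the fields and example values are assumptions), a goal-tailored nudge can be thought of as a small record pairing a human-written message with the goal and date it supports:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Nudge:
    """A human-written check-in text tied to a user's goal (illustrative schema only)."""
    goal: str       # the user goal this nudge supports, e.g. "complete the FAFSA"
    message: str    # text written by a human expert, not generated by the model
    send_on: date   # date the check-in text is scheduled to go out

example = Nudge(
    goal="complete the FAFSA",
    message="Quick check-in: have you gathered the tax documents you need for the FAFSA?",
    send_on=date(2025, 10, 1),
)
```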
We developed articles based on our Wave-Maker Success Framework by utilizing the knowledge, insights, and experiences of college coaches, financial services coordinators, and Wave-Makers. We also incorporated key references from research, higher education standards, and career readiness frameworks.
With this information, we partnered with Project Evident to refine the framework and align it with our program priorities. Finally, we used artificial intelligence to generate articles which were then reviewed, edited, and revised by our organization to ensure accuracy and relevance.
Our safety standards align with trust and safety guidelines from OpenAI.
Making Waves Education Foundation employs a Moderation API from OpenAI to minimize the occurrence of unsafe content in AI-generated completions through our chatbot. We are in the early stages of developing a custom content filtration system to complement our current Moderation API.
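A minimal sketch of how a message can be screened with the Moderation API before it reaches the chat model (the handling logic and fallback text below are illustrative, not our production filter):

```python
from openai import OpenAI

client = OpenAI()

def is_safe(text: str) -> bool:
    """Return False if the Moderation API flags the text (illustrative check only)."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=text,
    )
    return not result.results[0].flagged

user_message = "How do I write a strong college essay?"
if is_safe(user_message):
    print("Message passed moderation; forward it to the chat model.")
else:
    print("This message can't be processed. Please rephrase your question.")
```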
We have human reviewers examine AI-generated outputs, including regular reviews of outputs from our chatbot. Our human reviewers are informed about the limitations of the AI models used, have access to the information needed to verify outputs, and rely on their professional expertise.
We use “prompt engineering” on our chatbot to constrain the topic and tone of the AI-generated outputs, reducing the likelihood of producing undesired content. By providing additional context to the model, we can better steer the AI-generated outputs in the desired direction.
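For illustration, a system message can carry that additional context and constrain both topic and tone; the wording below is an example prompt, not the one used in our chatbot:

```python
from openai import OpenAI

client = OpenAI()

# Example system prompt constraining topic and tone; our production prompt differs.
SYSTEM_PROMPT = (
    "You are an education and employment copilot for students. "
    "Only answer questions about education, college, financial aid, and careers. "
    "Use an encouraging, plain-language tone. "
    "If asked about anything else, politely decline and suggest speaking with a trusted adult."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "How should I compare financial aid offers?"},
    ],
)
print(response.choices[0].message.content)
```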
We require users to register to access our chatbot to reduce the likelihood of misuse.
We limit the amount of text users can send and receive to prevent malicious prompt injection and to reduce the likelihood of misuse.
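A minimal sketch of how such limits can be enforced (the specific values below are illustrative, not our production limits): overly long inputs are rejected before they reach the model, and the reply length is capped.

```python
from openai import OpenAI

client = OpenAI()

MAX_INPUT_CHARS = 1000   # illustrative cap on what a user can send
MAX_OUTPUT_TOKENS = 500  # illustrative cap on the length of the generated reply

def answer(question: str) -> str:
    """Answer a question while enforcing simple input and output limits (sketch only)."""
    if len(question) > MAX_INPUT_CHARS:
        return "Your message is too long. Please shorten it and try again."
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": question}],
        max_tokens=MAX_OUTPUT_TOKENS,
    )
    return response.choices[0].message.content or ""
```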
We enable users to easily report improper functionality or concerns about application behavior through email. The inbox is monitored by a human who can respond appropriately.
We are aware of the limitations of language models, such as inaccurate information, offensive outputs, bias, and more. We communicate these limitations to our users through a disclosure at sign-up, as well as a micro-course we developed to promote safe and responsible use of AI. We carefully evaluate whether the AI models we use are appropriate for our use case and assess their performance across various inputs to identify potential performance drops.
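One lightweight way to check performance across varied inputs is to run a fixed set of test questions through the model and flag answers that miss expected information; the test cases below are illustrative examples, not our actual evaluation suite:

```python
from openai import OpenAI

client = OpenAI()

# Illustrative spot-check: each case pairs a prompt with a phrase a good answer
# should mention. A real evaluation would use a larger, human-reviewed set.
test_cases = [
    ("What does FAFSA stand for?", "Free Application for Federal Student Aid"),
    ("What is the Common App used for?", "college"),
]

for prompt, expected_phrase in test_cases:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    answer = response.choices[0].message.content or ""
    status = "ok" if expected_phrase.lower() in answer.lower() else "needs review"
    print(f"{status}: {prompt}")
```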
Making Waves Education Foundation has a policy for disallowed usage of its AI resource to ensure ethical, safe, and responsible use of the technology while preventing potential harm or exploitation of individuals and communities.
Our disallowed usage policy aligns with trust and safety guidelines from OpenAI.
We prohibit the use of our AI model for the following:
The AI model made a few mistakes, including providing incorrect information (12 instances), hallucinating that it was a real person (9 instances), and generating factual responses to inappropriate requests prompted by users (2 instances).
However, we have upgraded our AI model and added improved safety features to address these issues and improve its accuracy. Specifically, the current AI model in production, “GPT-3.5-Turbo,” can admit its mistakes, challenge incorrect premises, reject inappropriate requests, and refer to itself as an AI language model. Additionally, we have developed a micro-course, completed as part of sign-up, that teaches users safe and responsible use of AI and highlights when to use AI, when to consult a trusted person, and when to verify information such as deadlines and requirements with a primary source.
Overall, after evaluating these results, we determined that our AI model had an accuracy rate of 97.3%. We will continue monitoring the performance of our newly upgraded model and make further improvements as necessary.
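For context on how a rate like this is computed, the sketch below divides the logged issues by the number of reviewed responses; the total of 852 reviewed responses is an assumed figure chosen only so the arithmetic matches the reported rate, not a number from our evaluation.

```python
# Illustrative accuracy calculation; the evaluation-set size is assumed, not reported.
errors = 12 + 9 + 2       # incorrect information + persona hallucinations + inappropriate completions
total_reviewed = 852      # hypothetical number of reviewed responses
accuracy = 1 - errors / total_reviewed
print(f"{accuracy:.1%}")  # -> 97.3%
```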