Brainwaves
Full-Stack Developer
An edutainment app for critical thinking
What is Brainwaves?
Welcome to Brainwaves! In an effort to create an AI-integrated application that helps disadvantaged communities, our team built an edutainment app dedicated to improving the problem-solving and critical thinking capabilities of specially-abled individuals, targeted specifically at students aged 13 to 17. Users sharpen these skills through puzzle solving while being assisted by the friendly AI companion Wimmy the Whale.
The Problem
Students with neurological disorders such as attention deficit hyperactivity disorder (ADHD) often struggle academically in school environments. For most of these students, the core difficulty comes down to critical thinking, and today's school systems lack sufficient accommodations for them to thrive academically.
Our Solution
Through our research, we found that we could improve our users' critical thinking by giving them a framework for consistently breaking down problems through puzzle solving; with practice, the framework strengthens their critical thinking abilities. Furthermore, through AI integration, an AI companion offers consistent support and feedback, creating a warm and welcoming experience.
My Role in Brainwaves
Whilst creating Brainwaves, I was fully involved in the entire design process and took on the role of Full-Stack Developer. I participated in user research, the creation of the app's UI elements, and UX optimization. However, I was mainly in charge of bringing these ideas and designs to life through development: coding Brainwaves's interface and app functionality to create a gamified learning experience with AI integration through the app's friendly companion, Wimmy the Whale!
In addition to coding the front-end aspects of Brainwaves, I was in charge of the app's back-end development: creating an account system and a database used to personalize the app and take the user on a puzzle-solving journey across a wide range of learning paths to develop their critical thinking skills.
The Design Process
When designing Brainwaves, we aimed for an engaging, fun look and feel for our target audience, taking inspiration from the visual styles of apps such as Duolingo and Kahoot.
Research
Initial user research and competitive analysis were necessary to determine the viability and originality of Brainwaves's ideas in today's market. Competitive analysis let us evaluate the scope of the market we were entering and identify what our app could do better to stand out. User research through a questionnaire helped us identify our target demographic and gave insight into additional features we might need to accommodate that market.
Through the use of Google Forms our team crafted a comprehensive questionnaire which was sent to the public to be filled out. The questionnaire we crafted inquired about the following information about our users:
- Their demographic information (Age, gender, etc.)
- Their academic and critical thinking capabilities
- Their academic habits
- Their interests in our app idea
Competitive analysis was done with other brain training apps such as “Elevate”, and the data collected from the user responses was visualized and reviewed. Conducting this research allowed us to establish a vision and direction for Brainwaves, allowing us to begin the design process.
Colour Palette
When discussing the app’s colour palette, our team wanted Brainwaves to adopt an oceanic theme, with tones of blue as the primary colour to capture a calming, soothing feel. We paired these blues with gold as a secondary colour to bring a sense of vibrance to the app.
Typography
For the main typeface of Brainwaves, our team wanted a font that is clear, easy to read, and not straining on the user’s eyes. Therefore, Poppins was chosen for its legibility and clarity, giving the user an easy and rewarding experience.
Low-Fidelity Prototype
After establishing Brainwaves's brand identity and style, our team began sprinting various low-fidelity layout designs: four members produced different iterations in an attempt to design a fun and engaging user workflow. The best features and designs from each iteration were combined into a definitive LoFi design.
High-Fidelity Prototype
Initial usability testing of the LoFi design gave insight into Brainwaves’s layout and surfaced opportunities for additional app features. Before producing the high-fidelity prototype, I sprinted Brainwaves’s UI components and experimented with various colour combinations; we then produced a HiFi prototype using the sprinted components.
Marketing Campaign
In order to spread the word about our app, our team came up with a marketing campaign for the launch of Brainwaves. I designed the campaign with young adolescents who experience ADHD or another attention disorder, and who mainly struggle with critical thinking, as our target segment. With the segment’s pain points in mind, I crafted a marketing landing page containing a unique selling proposition that speaks to our users, followed by an outline of the app’s main functionality and user testimonials. The landing page also follows search engine optimization practices.
The Development Process
Preliminary Considerations
After designing Brainwaves’s UI elements and producing the HiFi prototype, I took charge as lead developer and made the executive decisions regarding the development process. Further usability testing of the HiFi prototype highlighted how important it was for our AI companion Wimmy the Whale to be present throughout the entire app, and we found that Wimmy was best utilized during puzzle solving to give users a comforting, smooth experience. Constant usability testing throughout development was also necessary to ensure a smooth user experience.
Development Tools and Frameworks
- Visual Studio Code
- Android Studio
- Figma
- React Native Expo
- Firebase
- OpenAI
- AWS Lambda
The Puzzles
The puzzles are the main functionality in Brainwaves. Therefore, I deemed them our MVP and gave them the most polish. When developing the puzzle sections, I wanted to ensure they were not a static experience. I wanted a gamified feel where the user enters a level, solves a set of puzzles, and unlocks further levels once they accumulate enough points. I achieved this with React’s useContext and useMemo hooks, storing points and progress and unlocking levels based on the user's progress.
import { useState, useMemo, useContext } from 'react';

// Per-category level and progress state shared through context
const [puzzleType, setPuzzleType] = useState('');
const [logicLevel, setLogicLevel] = useState(1);
const [numberLevel, setNumberLevel] = useState(1);
const [patternLevel, setPatternLevel] = useState(1);
const [numberProgress, setNumberProgress] = useState(0);
const [logicProgress, setLogicProgress] = useState(0);
const [patternProgress, setPatternProgress] = useState(0);

// Memoize the context value so consumers only re-render when a value changes
const appContext = useMemo(() => {
return {
puzzleType, setPuzzleType,
logicLevel, setLogicLevel,
numberLevel, setNumberLevel,
patternLevel, setPatternLevel,
numberProgress, setNumberProgress,
logicProgress, setLogicProgress,
patternProgress, setPatternProgress,
}
}, [puzzleType, logicLevel, numberLevel, patternLevel, numberProgress, logicProgress, patternProgress]);
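The unlock check itself can be a small pure function driven by the accumulated points. The sketch below illustrates the idea; the 10-points-per-level threshold is an assumed placeholder, not Brainwaves's actual tuning:

```javascript
// Hypothetical helper: a level unlocks once the user has banked enough
// points from earlier levels. POINTS_PER_LEVEL is an assumed threshold.
const POINTS_PER_LEVEL = 10;

function isLevelUnlocked(level, points) {
  if (level === 1) return true; // the first level is always open
  return points >= (level - 1) * POINTS_PER_LEVEL;
}
```

A level-select screen would read `points` from the shared context and call this check for each level it renders.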
export const logicProblems = [
{
title: "Question #",
id: "Level #",
type: "M/Q",
description: "[PUZZLE DESCRIPTION]",
answer: "[CORRECT OPTION]",
explanation: "[PUZZLE EXPLANATION]",
options: ['OPTION 1', 'OPTION 2',
'OPTION 3', 'OPTION 4']
},
{
...
},
]
By using a database of puzzles instead of statically coding them in, I was also able to develop a system where certain puzzles get used in a level with randomized options to create a more challenging experience.
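One piece the level code below relies on is a shuffle() helper, which is not shown in the snippets; a standard Fisher–Yates implementation along these lines would do the job:

```javascript
// Fisher–Yates shuffle: returns a new array with the elements in random order
function shuffle(array) {
  const result = [...array];
  for (let i = result.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1)); // pick from the unshuffled prefix
    [result[i], result[j]] = [result[j], result[i]];
  }
  return result;
}
```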
I also added elements such as a set number of attempts per puzzle question to make the puzzle-solving experience less stressful. With the databases and systems in place, I developed levels that pull each puzzle from the database and lay it out as a multiple-choice quiz, and by using React hooks such as useState, I created a streamlined, dynamic way to generate several levels of puzzles with systems such as a fixed number of attempts.
import { useState } from 'react';
import { View, Text } from 'react-native';
import OptionBtn from "../components/Atoms/OptionButton";
import QuestionBox from "../components/Atoms/QuestionBox";
import { logicProblems } from "../data/wordProblems";

export default function PuzzlePage() {
const [data, setData] = useState(logicProblems);
const [quesIndex, setQuesIndex] = useState([0, 1, 2, 3]); // indexes into the puzzle database for this level
const [currentQuestion, setCurrentQuestion] = useState(0); // which of the four puzzles the screen displays
const [optIndex, setOptIndex] = useState(shuffle([0, 1, 2, 3])); // shuffled display order of the answer options

// `points`, `attempt`, and the full handleAnswer are shown in the logic block below
const handleAnswer = (choice, answer) => {
// enter puzzle logic here
}

const question = data[quesIndex[currentQuestion]];
return (
<>
<View>
<Text>Attempts: {attempt}</Text>
<QuestionBox style={styles.text_container} text={question.description} />
</View>
<View>
{optIndex.map((oi) => (
<OptionBtn
key={oi}
name={question.options[oi].toUpperCase()}
onPress={() => handleAnswer(question.options[oi], question.answer)}
/>
))}
</View>
</>
)
}
const [points, setPoints] = useState(0);
const [attempt, setAttempt] = useState(3);
const [currentScreen, setCurrentScreen] = useState(1);

// Move to the next puzzle, or to the Feedback screen after the fourth one
const advance = () => {
if (currentScreen < 4) {
setAttempt(3);
setCurrentQuestion(currentQuestion + 1);
setOptIndex(shuffle(optIndex));
setCurrentScreen(currentScreen + 1);
} else {
navigation.push("Feedback", {
points: points,
questions: quesIndex.map((qi) => data[qi].description)
});
}
}

const handleAnswer = (choice, answer) => {
if (choice === answer) {
setPoints(points + 1); // one point per correct answer
advance();
} else {
setAttempt(attempt - 1);
if (attempt === 1) advance(); // all three attempts used: move on without the point
}
}
Once I had developed the puzzle levels’ layouts and functionality, I implemented the logic that operates the puzzles. I built a point system: as the user goes through a level, they receive a point for each correct answer; if they answer incorrectly, they lose an attempt and can try again, for a total of three attempts. Losing all three attempts lets the user proceed to the next puzzle. Once the user finishes the fourth puzzle, the data from their responses is sent to a screen that gives the user feedback on the level.
AI Integration
We decided early on in the design process that Wimmy would be the main form of AI integration. I programmed Wimmy as the app’s AI companion using the OpenAI API’s gpt-4 model, with the API requests made through an AWS Lambda function. The system prompt in that Lambda request gives the AI the personality of a friendly, fun whale companion.
import OpenAI from "openai";

const openai = new OpenAI({
apiKey: process.env.EXPO_PUBLIC_APIKEY,
});

export const handler = async (event) => {
// API Gateway may deliver the body as a JSON string
const post = typeof event.body === "string" ? JSON.parse(event.body) : event.body;
if (!post?.question) {
return { statusCode: 400, body: JSON.stringify({ error: "Missing question" }) };
}
const completion = await openai.chat.completions.create({
model: "gpt-4",
messages: [{
role: "system",
content: "Your name is Wimmy. [INSERT PERSONALITY]" // system prompt defining Wimmy's persona
}, {
role: "user",
content: post.question
}],
});
return {
statusCode: 200,
body: JSON.stringify(completion),
};
};
export async function getHint(prompt) {
const response = await fetch("[AWS LAMBDA LINK]", {
method: "post",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ question: `Hey Wimmy. Give me a broken down hint of this question: "${prompt}".` }),
});
return response.json();
}

export async function getChat(prompt) {
const response = await fetch("[AWS LAMBDA LINK]", {
method: "post",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ question: `${prompt}. Be brief and concise` }),
});
return response.json();
}

export async function getFeedback(quesOne, quesTwo, quesThree, quesFour) {
const response = await fetch("[AWS LAMBDA LINK]", {
method: "post",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ question: `Hey Wimmy. Give me a detailed breakdown of these questions listed: 1. ${quesOne} 2. ${quesTwo} 3. ${quesThree} 4. ${quesFour}` }),
});
return response.json();
}
Once the AI API request was set up in the AWS Lambda function, I fully integrated it into Brainwaves. I created a separate file of AI request functions that fetch the Lambda endpoint with various prompts for OpenAI: getHint(), getChat(), and getFeedback().
It is important to note that the responses from these requests are all in Wimmy the Whale’s character.
- getHint(): asks the AI to generate a detailed, broken-down hint for the question passed into the function
- getChat(): answers the given prompt; this function is used in a chat box implemented in the app
- getFeedback(): takes a level’s puzzles as input and generates a detailed breakdown of them
These request functions were then used on other screens through simple imports, bringing AI functionality to Brainwaves. For example, getHint() was used in the puzzle-solving section by placing it in a handleSend() function that is called each time Wimmy’s tail at the bottom of the screen is pressed, returning a broken-down hint in Wimmy’s persona. The same approach was used to request consistent feedback from the AI throughout the app. Furthermore, I created a chat feature where the user can talk to Wimmy and ask him questions using getChat(). All of these AI functionalities were implemented with the goal of helping our users improve their critical thinking.
const [aiResponse, setAIResponse] = useState('');

const handleSend = async () => {
try {
const completion = await getHint(data[quesIndex[currentQuestion]].description);
setAIResponse(completion.choices[0].message.content);
} catch (error) {
console.error("AI request failed:", error); // surface the failure instead of crashing the handler
}
}
App and Account Systems
In order to create a more personalized experience when using Brainwaves, our team decided to design an account system where users can save their progress and have their own profiles. To achieve this, I used Firebase to allow our users to set up an account with their email and a password.
import { getAuth, createUserWithEmailAndPassword } from 'firebase/auth';

const [email, setEmail] = useState("");
const [password, setPassword] = useState("");

const addUser = async () => {
if (email && password) {
const auth = getAuth();
const results = await createUserWithEmailAndPassword(auth, email, password);
console.log(results.user);
} else {
alert('Please fill in all textfields');
}
}
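Before hitting Firebase, the inputs can also be sanity-checked client-side. The validator below is a hypothetical sketch of such a pre-flight check (the six-character minimum mirrors Firebase Auth's own password rule):

```javascript
// Hypothetical pre-flight check before calling createUserWithEmailAndPassword
// (or signInWithEmailAndPassword on login); Firebase itself rejects
// passwords shorter than 6 characters.
function validateCredentials(email, password) {
  if (!email || !password) return "Please fill in all textfields";
  if (!/^\S+@\S+\.\S+$/.test(email)) return "Please enter a valid email";
  if (password.length < 6) return "Password must be at least 6 characters";
  return null; // null means the credentials look OK to submit
}
```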
import { setDoc, doc } from "firebase/firestore";
import { db } from './firebaseConfig';
import { AppContext } from '../context/AppContext';

const { userName, wimPoints, pfp, numberProgress, numberLevel, logicProgress, logicLevel, patternProgress, patternLevel, isDyslexic } = React.useContext(AppContext);

const addUser = async () => {
if (email && password) {
const auth = getAuth();
const results = await createUserWithEmailAndPassword(auth, email, password);
// Store the user's profile and progress under their Firebase Auth UID
const userRef = doc(db, "users", results.user.uid);
await setDoc(
userRef,
{
userName: userName,
avatar: pfp,
wimPoints: wimPoints,
numberProg: numberProgress,
logicProg: logicProgress,
numberLvl: numberLevel,
logicLvl: logicLevel,
patternProg: patternProgress,
patternLvl: patternLevel,
},
{ merge: true }
);
} else {
alert('Fill in all textfields');
}
}
Once users create an account, information such as their username, level progress, and app points is stored in a Firebase database tied to their account. Therefore, whenever users sign up or log into their accounts, their progress is saved. Additionally, I implemented a way for users to upload their own profile pictures for more personalization.
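Loading works the same way in reverse: on login, the user's Firestore document can be fetched with getDoc and merged over the app's defaults before being pushed into context. The hydrate helper below is a sketch of that merge; the field names follow the setDoc call above, while the defaults are assumed values:

```javascript
// Merge a stored Firestore document over the app's default state, so
// missing fields (e.g. on older accounts) fall back to sensible defaults.
const DEFAULT_STATE = {
  userName: "", wimPoints: 0,
  numberProg: 0, numberLvl: 1,
  logicProg: 0, logicLvl: 1,
  patternProg: 0, patternLvl: 1,
};

function hydrateUserState(docData) {
  return { ...DEFAULT_STATE, ...(docData ?? {}) };
}

// Sketch of the Firestore read (assumed usage):
//   const snap = await getDoc(doc(db, "users", uid));
//   const state = hydrateUserState(snap.exists() ? snap.data() : null);
```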
To address Brainwaves's accessibility, I also developed a settings page where users can customize the look of the app as they see fit. I implemented a dark/light mode and a colour-blind mode in which the app's colours shift to purple tones to accommodate colour blindness. These customizations were also stored in the Firebase database and saved per user.
import { useContext } from 'react';
import { AppContext } from '../context/AppContext.js';

const {
isDarkTheme, setIsDarkTheme, isColorBlind, setIsColorBlind, isDyslexic, setIsDyslexic
} = useContext(AppContext);
// Dark/Light theme switch
<Switch
onValueChange={() =>
setIsDarkTheme(current => !current)}
value={isDarkTheme}
/>
// Colour blind theme switch
<Switch
onValueChange={() =>
setIsColorBlind(current => !current)}
value={isColorBlind}
/>
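Under the hood, the theme flags can resolve to a single palette object that every screen consumes. The sketch below illustrates the idea; the hex values are placeholders, not Brainwaves's actual palette:

```javascript
// Hypothetical palette resolver: colour-blind mode shifts the primary colour
// to purple tones, dark mode swaps background and text. Hex values assumed.
function resolvePalette(isDarkTheme, isColorBlind) {
  const primary = isColorBlind ? "#7B5EA7" : "#2B6CB0"; // purple vs ocean blue
  return {
    primary,
    background: isDarkTheme ? "#121212" : "#FFFFFF",
    text: isDarkTheme ? "#FFFFFF" : "#1A1A1A",
  };
}
```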
Challenges and Key Takeaways
At the beginning of the development process, I was very new to OpenAI and the use of AI in projects, so my initial iterations of AI integration were quite unoptimized. However, developing Brainwaves gave me the opportunity to learn to use OpenAI to its full potential.
The use of databases and Firebase posed a challenge at first, as it seemed quite overwhelming. However, by implementing these databases, I was able to create a dynamic, near industry-ready application in the span of one month.
Points of Improvement
Brainwaves at its current stage sends up to five requests to OpenAI per level for its AI functionalities. If Brainwaves were published to the public, this volume of requests would become excessive and quite expensive; at times, it would even crash the app. A more sustainable approach would be to send one request to OpenAI at the beginning of each level to generate the hints and feedback up front, instead of five requests per level.
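One way to batch would be to build a single prompt asking Wimmy for all four hints at once, caching the reply locally and slicing it per puzzle. The prompt builder below is a sketch of that idea; the wording of the combined prompt is an assumption:

```javascript
// Hypothetical: combine a level's puzzles into one request instead of five.
// The single response would be cached and reused throughout the level.
function buildLevelPrompt(questions) {
  const numbered = questions
    .map((q, i) => `${i + 1}. ${q}`) // number each puzzle so the reply is parseable
    .join("\n");
  return `Hey Wimmy. For each numbered question below, give a broken-down hint and brief feedback:\n${numbered}`;
}
```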
The current puzzle and level system in Brainwaves pulls from a database of puzzles to create a dynamic, smooth experience, but this database lives locally in the app files. Future updates would be much easier if the database were pulled from Firebase instead, allowing new puzzles to be added without shipping a new app build.
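Moving the puzzle database to Firestore would only require a small fetch-and-map step at level load. The sketch below shows the shape of that step; the "puzzles" collection name and the snapshot mapping are assumptions:

```javascript
// Pure mapper: turn Firestore docs into the same puzzle objects the levels
// already consume, so the rest of the app stays unchanged.
function mapPuzzleDocs(docs) {
  return docs.map((d) => ({ id: d.id, ...d.data() }));
}

// Sketch of the Firestore read (assumed usage):
//   const snap = await getDocs(collection(db, "puzzles"));
//   setData(mapPuzzleDocs(snap.docs));
```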
Conclusion
I designed and developed an edutainment app dedicated to improving the critical thinking capabilities of neurodivergent individuals. I was fully involved in the research, design, and development process. I had the opportunity to develop Brainwaves using the React Native Expo framework and stepped into new territory with the use of the OpenAI API to integrate AI functionalities and Firebase in order to establish a database and account system for our app.