GenAI: Building an AI Quiz App with Next.js and Gemini AI
Generative AI (GenAI) is a rapidly evolving field. To gain hands-on experience and to help with my son's studies, I developed a quiz generator. It's a simple concept: input notes, output an AI-generated quiz. This post delves into the technical aspects of building the application with Next.js and Gemini AI, Google's generative AI model. I'll share how I leveraged the free Flash model to create a practical tool that can benefit students. At present, the Flash model is available free of charge, which makes it a great choice for learning and experimentation.
Here's an image of the editor that will generate the quiz.
While this tutorial is set within the context of a Next.js application, we'll focus mainly on the specific techniques and strategies for building the AI functionality.
First, let's set up our development environment. We'll need Google's generative AI package to communicate with the Gemini API:
$ pnpm add @google/generative-ai
To use Gemini AI, you'll need an API key from Google AI Studio. Head over to the Gemini API page to get started. Note that if you can't access Google AI Studio, Gemini might not be available in your region, or Google may require additional age verification.
Once you have your API key, add it to your project's environment variables:
GEMINI_API_KEY=<your-api-key>
Our frontend serves two main purposes: collecting user input (their study notes) and displaying the AI-generated quiz. Rather than directly communicating with Gemini, we'll route these interactions through our API layer for better security and control.
Let's start by setting up our component's state management. We need to track:
- User input (the study notes)
- Quiz data from Gemini
- Loading states
- Potential errors
import React, { useState } from 'react';

const AiQuiz: React.FC = () => {
const [question, setQuestion] = useState<string>('');
const [quiz, setQuiz] = useState<QuizState>({ questions: [] });
const [error, setError] = useState<string | null>(null);
const [isLoading, setIsLoading] = useState<boolean>(false);
// more code here...
}
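The component above references a few types that haven't been defined yet. Here's one possible shape for them; these definitions are my own sketch, so adjust them to match your own QuizMaster component:

```typescript
// Hypothetical type definitions for the quiz data (my own sketch, not the
// original project's code). The shape mirrors the JSON Gemini will return.
interface QuizOption {
  id: string; // 'a' | 'b' | 'c' | 'd'
  content: { type: 'text' | 'code'; value: string };
}

interface QuizQuestion {
  id: number;
  question: string;
  options: QuizOption[];
  correctAnswer: string;
}

interface QuizState {
  questions: QuizQuestion[];
}

// Shape of the API response body: the quiz arrives as a JSON string in `data`.
interface QuizResponse {
  message: string;
  data: string;
}

// Custom error carrying the HTTP status, so the UI can distinguish API
// failures from parsing or network problems.
class QuizAPIError extends Error {
  constructor(
    public status: number,
    message: string,
  ) {
    super(message);
    this.name = 'QuizAPIError';
  }
}
```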
Handling API Communication
Next, we'll create the function that bridges our frontend with our backend API. This function will:
- Send the user's notes to our API (/api/ai/quiz)
- Handle the response from Gemini (parse the response and return the quiz data)
- Manage loading states and error handling
// ...previous code here
const generateQuiz = async (questionText: string): Promise<QuizState> => {
const response = await fetch('/api/ai/quiz', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({ question: questionText }),
});
if (!response.ok) {
throw new QuizAPIError(
response.status,
`Failed to generate quiz: ${response.statusText}`
);
}
const data = (await response.json()) as QuizResponse;
try {
return { questions: JSON.parse(data.data) };
} catch (e) {
throw new Error('Failed to parse quiz data');
}
};
const handleSubmit = async () => {
if (!question.trim()) {
setError('Please enter some text before submitting');
return;
}
try {
setError(null);
setIsLoading(true);
const newQuiz = await generateQuiz(question);
setQuiz(newQuiz);
} catch (error) {
if (error instanceof QuizAPIError) {
setError(`API Error: ${error.message}`);
} else {
setError('An unexpected error occurred. Please try again.');
}
console.error('Error:', error);
} finally {
setIsLoading(false);
}
};
// more code here...
};
Finally, let's build the interface where users can input their notes and view the generated quiz:
// ...previous code here
return (
<section className={container}>
<article className={textareaContainer}>
<h1 className={headingStyles}>AI QuizGen</h1>
<InputTextarea
label="Please copy and paste your notes below"
id="question"
value={question}
onChange={(e) => setQuestion(e.target.value)}
isLoading={isLoading}
aria-invalid={!!error}
aria-describedby={error ? 'error-message' : undefined}
/>
{error && (
<p id="error-message" className={errorStyles} role="alert">
{error}
</p>
)}
<Button
variant="primary"
onClick={handleSubmit}
disabled={isLoading || !question.trim()}
>
{isLoading ? 'Generating Quiz...' : 'Submit'}
</Button>
</article>
<article className={quizQuestionStyles} data-testid="quiz-questions">
{quiz.questions?.length > 0 && <QuizMaster initialState={quiz} />}
</article>
</section>
);
};
Integrating with an AI service is best done on the backend, for several reasons:
- Enhanced Security: Keeps API keys and sensitive data secure by preventing exposure on the client side.
- Better Control: Allows for efficient management of rate limiting and quota usage, optimizing API resource utilization.
- Improved Performance: Reduces the computational load on client devices, leading to a smoother user experience, especially on less powerful hardware.
- Caching Capabilities: Enables easier implementation of caching mechanisms to store and reuse common AI responses, reducing the number of API calls and costs.
- Seamless Updates: Facilitates straightforward updates and maintenance of AI functionality without requiring changes on the client side, ensuring all users benefit from the latest features.
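To illustrate the caching point above, here's a deliberately simple sketch of an in-memory cache keyed by the notes text, so identical requests don't trigger a second Gemini call. This is my own illustration, not part of the app; in production you'd likely reach for something like Redis, since a Map only lives as long as the server process:

```typescript
// Illustrative only: cache generated quizzes (as JSON strings) keyed by the
// user's notes, so repeated submissions of the same text skip the API call.
const quizCache = new Map<string, string>();

function getCachedQuiz(notes: string): string | undefined {
  // Normalise whitespace so trivially different inputs hit the same entry.
  return quizCache.get(notes.trim());
}

function setCachedQuiz(notes: string, quizJson: string): void {
  quizCache.set(notes.trim(), quizJson);
}
```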
Next.js offers a straightforward way to build an API layer through its app/api directory, allowing developers to create serverless API routes directly within their application. Each API route supports various HTTP methods and can handle tasks like data fetching and form submissions.
To proceed, we'll create a new file called route.ts under the app/api/ai/quiz folder. The logic in this file will handle the following:
- Communicating with Gemini AI
- Formatting the response
- Handling errors
Our route will be a POST endpoint and will look like this:
import { NextRequest, NextResponse } from 'next/server';
import {
GoogleGenerativeAI,
HarmCategory,
HarmBlockThreshold,
} from '@google/generative-ai';
/**
* Route to generate a quiz based on the text supplied by the user
* @param request
* @returns
*/
export async function POST(request: NextRequest) {
// all our logic will go here...
//1. Extract the question from the request body
//2. Configure the Gemini client
//3. Call the Gemini API to generate the quiz
//4. Return the response
}
Two things to note in this snippet:
- The import of the @google/generative-ai package. The utilities in this package are used to communicate with the Gemini API.
- There are a number of steps in setting up the request to the Gemini API and handling the response. These steps are identified in the comments, and we'll cover each one in detail in the following sections.
1. Extract the question parameter from the request body
In the snippet below, we extract the question parameter from the request body. This is the text that the user has supplied, and on which we'll base the quiz.
const body = await request.json();
const { question } = body;
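Before passing the text on to Gemini, it's worth validating it so we can reject bad requests with a 400 instead of burning an API call. A small helper like the following could do the job; the helper name and behaviour are my own sketch, not part of the original route:

```typescript
// Hypothetical helper: returns the question string if the body is valid,
// or null if it's missing, not a string, or empty after trimming.
function extractQuestion(body: unknown): string | null {
  if (typeof body !== 'object' || body === null) return null;
  const { question } = body as { question?: unknown };
  if (typeof question !== 'string' || question.trim().length === 0) {
    return null;
  }
  return question;
}
```

In the route, a null result would translate into a `NextResponse.json({ error: '...' }, { status: 400 })` early return.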
The next step is to configure the Gemini client.
const apiKey = process.env.GEMINI_API_KEY;
if (!apiKey) {
throw new Error('GEMINI_API_KEY environment variable is not set');
}
const genAI = new GoogleGenerativeAI(apiKey);
// Configure the generation settings
const generationConfig = {
temperature: 1, // Controls randomness
topP: 0.95, // Nucleus sampling
topK: 64, // Limits token selection
maxOutputTokens: 8192, // Maximum response length
responseMimeType: 'application/json', // Ensure we ask for a JSON response
};
const model = genAI.getGenerativeModel({
model: 'gemini-2.5-flash',
generationConfig,
});
const safetySettings = [
{
category: HarmCategory.HARM_CATEGORY_HARASSMENT,
threshold: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
},
{
category: HarmCategory.HARM_CATEGORY_HATE_SPEECH,
threshold: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
},
{
category: HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT,
threshold: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
},
{
category: HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
threshold: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
},
];
This involves:
- Setting up the API key and creating an instance of the Gemini client
- Defining the generation configuration
- Getting a model instance. It's here we specify the model we want to use (the Gemini Flash model, in our case) and attach the generation settings
- Setting up the safety configuration. These settings block inappropriate content:
  - Blocks harassment
  - Prevents hate speech
  - Filters sexually explicit content
  - Screens dangerous content
const promptText = prompt(question, 5);
const result = await model.generateContent({
contents: [{ role: 'user', parts: [{ text: promptText }] }],
safetySettings,
});
When we call the Gemini API, we need to provide a prompt. This is not just the text that we want to base the quiz on. It will additionally contain instructions for the model on how to generate the quiz. To do this, we declare a prompt template, and then fill it with the actual text that the user has supplied. Additionally, we specify the number of questions we want to generate.
Here's a simplified version of the prompt template:
/**
* Prompt used to generate quiz questions
* based on text supplied by user (e.g. a chapter of a book or their notes)
*/
const prompt = (text: string, numQuestions: number) => `
Generate a multiple-choice quiz based on the following text:
${text}
Format the output as a single JSON array containing question objects.
Each question object should have the following structure:
{
"id": 1,
"question": "What is the capital of France?",
"options": [
{"id": "a", "content": {"type": "text", "value": "London"}},
{"id": "b", "content": {"type": "text", "value": "Berlin"}},
{"id": "c", "content": {"type": "text", "value": "Paris"}},
{"id": "d", "content": {"type": "text", "value": "Madrid"}}
],
"correctAnswer": "c"
}
Generate ${numQuestions} questions in this format, ensuring:
1. Each question is about a key concept or detail from the text.
2. There are four answer options per question, with one correct answer and three plausible but incorrect answers.
3. The correct answer is randomly assigned to option a, b, c, or d.
4. If an option contains code, set the "type" to "code" instead of "text".
`;
Once we have the response from Gemini, we need to format it in a way that our frontend can understand:
/**
* Route to generate a quiz based on the text supplied by the user
* @param request
* @returns
*/
export async function POST(request: NextRequest) {
try {
// steps 1-4 ...
return NextResponse.json(
{
message: 'Success',
data: result.response.text(),
},
{ status: 200 },
);
} catch (error) {
console.error('Error processing request:', error);
return NextResponse.json(
{ error: 'Internal Server Error' },
{ status: 500 },
);
}
}
Finally, here's what the result looks like:
When the app was deployed, it initially functioned as expected, but then threw an error:
Tip: Copy and paste the response from Gemini into a JSON validator to check for any issues. One such resource is: http://json.parser.online.fr/
It becomes clear that the response from Gemini is not valid JSON. This is a case of the AI hallucinating. The solution to this problem is to utilise Structured Outputs.
To achieve this, we need to declare a response schema. This tells the Gemini API what we want the response to look like.
import {
GoogleGenerativeAI,
HarmCategory,
HarmBlockThreshold,
SchemaType,
} from '@google/generative-ai';
const responseSchema = {
type: SchemaType.ARRAY,
items: {
type: SchemaType.OBJECT,
properties: {
id: {
type: SchemaType.INTEGER,
minimum: 1,
},
question: {
type: SchemaType.STRING,
},
options: {
type: SchemaType.ARRAY,
items: {
type: SchemaType.OBJECT,
properties: {
id: {
type: SchemaType.STRING,
enum: ['a', 'b', 'c', 'd'],
},
content: {
type: SchemaType.OBJECT,
properties: {
type: {
type: SchemaType.STRING,
enum: ['text', 'code'],
},
value: {
type: SchemaType.STRING,
},
},
required: ['type', 'value'],
},
},
required: ['id', 'content'],
},
minItems: 4,
maxItems: 4,
},
correctAnswer: {
type: SchemaType.STRING,
enum: ['a', 'b', 'c', 'd'],
},
},
required: ['id', 'question', 'options', 'correctAnswer'],
},
minItems: 1,
};
export async function POST(request: NextRequest) {
// previous code ...
const generationConfig = {
temperature: 1, // Controls randomness
topP: 0.95, // Nucleus sampling
topK: 64, // Limits token selection
maxOutputTokens: 8192, // Maximum response length
responseMimeType: 'application/json', // Ensure we ask for a JSON response
responseSchema: responseSchema, // Ensure we use the schema we defined above
};
Firstly, we import the SchemaType enum from the @google/generative-ai package. Next, we define the response schema, which tells the Gemini API what we want the response to look like. Lastly, we update the generation configuration to include the response schema, which guides the model and prevents hallucinated responses.
Once the structured output is in place, the app functions as expected.
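Even with structured outputs enforcing the schema server-side, a lightweight runtime check on the client is cheap insurance before handing the parsed data to the quiz component. Here's a minimal type-guard sketch; the guard is my own addition, assuming the question shape shown earlier:

```typescript
// Hypothetical client-side guard (my own addition): verifies the parsed
// response is a non-empty array of well-formed question objects.
interface ParsedQuestion {
  id: number;
  question: string;
  options: { id: string; content: { type: string; value: string } }[];
  correctAnswer: string;
}

function isQuizArray(value: unknown): value is ParsedQuestion[] {
  if (!Array.isArray(value) || value.length === 0) return false;
  return value.every((q) => {
    if (typeof q !== 'object' || q === null) return false;
    const question = q as ParsedQuestion;
    return (
      typeof question.id === 'number' &&
      typeof question.question === 'string' &&
      Array.isArray(question.options) &&
      question.options.length === 4 &&
      ['a', 'b', 'c', 'd'].includes(question.correctAnswer)
    );
  });
}
```

In generateQuiz, this would replace the bare JSON.parse with a parse-then-validate step, throwing a descriptive error when the guard fails.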
In this post, we've built an AI quiz app using Next.js and Gemini AI. We've covered how to set up your development environment, build the frontend interface, and integrate the backend API. We've also discussed the importance of structured outputs when working with AI models.