DevSpeak is built on a robust, scalable architecture that leverages the power of Large Language Models (LLMs) to transform informal requirements into formal technical specifications.
1. Frontend (React/Vite): A Single Page Application (SPA) that provides the user interface for inputting requirements, configuring translation parameters, and viewing the output.
2. Backend (Node.js/Express): A Node.js/Express API, deployable as serverless functions, that handles authentication, database access, and communication with the LLM.
3. LLM Integration (Google Gemini API): The core translation engine that processes the input and generates the structured output based on the provided configuration.
4. Database (Firestore): A NoSQL document database that stores user profiles, translation history, and configuration settings.
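To make the Firestore component concrete, the sketch below models one plausible shape for a translation-history document. The source does not specify a schema, so every field name here is an illustrative assumption, not DevSpeak's actual data model.

```typescript
// Hypothetical shape of a translation-history document stored in Firestore.
// Field names are illustrative assumptions; the actual schema is not specified.
interface TranslationRecord {
  userId: string;        // owner of the translation (assumed key into user profiles)
  input: string;         // raw informal requirement as submitted
  output: string;        // generated Markdown specification
  settings: {
    targetAudience: string;
    techContext: string;
    outputFormat: string;
    verbosity: string;
  };
  latencyMs: number;     // end-to-end LLM round-trip time
  createdAt: string;     // ISO-8601 timestamp
}

// Helper that assembles a record ready to be written to Firestore.
function makeRecord(
  userId: string,
  input: string,
  output: string,
  settings: TranslationRecord["settings"],
  latencyMs: number,
): TranslationRecord {
  return {
    userId,
    input,
    output,
    settings,
    latencyMs,
    createdAt: new Date().toISOString(),
  };
}
```

A document like this supports the "translation history" feature directly: querying by `userId` ordered by `createdAt` yields a user's past translations with the settings that produced them.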
When a user submits an informal requirement, the following pipeline is executed:
1. Input Ingestion: The frontend captures the raw text and the selected configuration parameters (Target Audience, Tech Context, Output Format, Verbosity).
2. Prompt Construction: The backend dynamically constructs a prompt for the LLM, incorporating the input text and the configuration parameters.
3. LLM Processing: The prompt is sent to the Google Gemini API, which synthesizes the structured technical specification.
4. Output Rendering: The generated output is streamed back to the frontend and rendered as Markdown, with syntax highlighting applied to any code blocks.
5. Persistence: The translation result, along with metadata (timestamp, settings, latency), is saved to Firestore for future reference.
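The prompt-construction step (step 2) can be sketched as a pure function that merges the raw requirement with the four configuration parameters. This is a minimal illustration, not DevSpeak's actual prompt template; the function name, parameter values, and wording are all assumptions.

```typescript
// Hypothetical sketch of dynamic prompt construction (step 2 of the pipeline).
// The template text and option values are illustrative assumptions.
interface TranslationConfig {
  targetAudience: string;  // e.g. "engineering", "product", "executive"
  techContext: string;     // e.g. "React SPA + Express API"
  outputFormat: string;    // e.g. "acceptance-criteria", "api-spec"
  verbosity: string;       // e.g. "concise", "detailed"
}

function buildPrompt(rawRequirement: string, config: TranslationConfig): string {
  // Assemble a structured prompt: instructions first, then the user's input,
  // so the configuration parameters constrain how the LLM synthesizes output.
  return [
    "You are a technical writer producing a formal specification.",
    `Target audience: ${config.targetAudience}.`,
    `Technical context: ${config.techContext}.`,
    `Output format: ${config.outputFormat}.`,
    `Verbosity: ${config.verbosity}.`,
    "",
    "Informal requirement:",
    rawRequirement,
  ].join("\n");
}
```

In the real pipeline the resulting string would be sent to the Gemini API (step 3) and the response forwarded to the frontend for rendering; keeping `buildPrompt` pure makes it easy to unit-test the template independently of the LLM call.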