The DAIR Program is no longer accepting applications for cloud resources, but access to BoosterPacks and their resources remains available. BoosterPacks will be maintained and supported until January 17, 2025.

After January 17, 2025: 

  • Screenshots should remain accurate; however, where you are instructed to log in to your DAIR account in AWS, you will be required to log in to a personal AWS account.
  • Links to AWS CloudFormation scripts for the automated deployment of each sample application should remain intact and functional. 
  • Links to GitHub repositories for downloading BoosterPack source code will remain valid as they are owned and sustained by the BoosterPack Builder (original creators of the open-source sample applications). 

AI Chatbot-Powered Web Platform Starter App

Here’s what you’ll find in this Sample Solution:

Introduction

This Sample Solution, from the AI Chatbot-Powered Web Platform Starter App BoosterPack, demonstrates how 1280 Labs used the latest LLMs to develop a starter full-stack application for AI-powered projects.

Problem Statement

In the past year, the Canadian AI startup landscape has experienced significant growth, propelled by notable advancements in Artificial Intelligence (AI) and Machine Learning (ML). These startups have dominated venture capital fundraising rounds and tech news headlines, addressing a diverse range of industry-specific challenges through AI-based products.

While these ventures contribute innovative solutions to various niche problems, most share a common technological foundation. Currently, teams aiming to develop an AI-powered application face the challenge of assembling a skilled web development team proficient in both frontend and backend technologies to build a secure full-stack platform, along with AI expertise to develop the distinctive AI functionality that makes the company unique.

Traditionally, teams would be required to engage specialized frontend and backend developers, investing 2-6 weeks in building the basic functionality of a web platform before delving into the integration of AI features. Our BoosterPack streamlines this process by replacing the initial phase of development setup with a functional full-stack platform equipped with user registration, authentication, and management. It eliminates the necessity to recruit additional specialized developers, as the frontend components and API architecture are pre-built, enabling teams to focus on developing top-notch web platforms with a secure backend API within our pre-configured environment.

Beyond jumpstarting product development roadmaps, adopting our BoosterPack yields cost savings by minimizing reliance on specialized web developers. This approach empowers development teams to achieve more with smaller, agile teams. Additionally, the BoosterPack will be launchable with or without the AI functionality, allowing teams looking to build any web platform to jumpstart their development.

AI Starter App in the DAIR Cloud – Sample Solution

Sample Solution Overview Diagram

The diagram below illustrates the structure of the Sample Solution.

Deploying the BoosterPack will utilize CloudFormation to replicate the project’s infrastructure within your AWS environment. The AWS resources are nested within a Security Group to control network access, and the frontend and backend repositories are hosted in Docker containers within an EC2 instance. The LLMs are hosted externally through Huggingface, OpenAI, Anthropic, and Mistral.

Component Descriptions

Significant components used in the Sample Solution are summarized in the table below:

How to Deploy and Configure

If you’re a DAIR participant, you can deploy the Sample Solution by navigating to the BoosterPack Catalogue and following any instructions on that page for deploying a new instance of the Solution.

Prerequisites:

To utilize the chat functionality, you must configure API keys for the desired third-party LLMs at this point. To utilize the APIs, you will need to enter credit card or other payment details in your organization account for each of these tools. Any of the Claude, Huggingface, Mistral, and OpenAI keys can be supplied or omitted.
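As an illustration, the backend typically reads these keys from environment variables. Below is a sketch of what a backend .env file might contain; the variable names here are hypothetical, so match them to the names the backend actually expects:

```
# Hypothetical variable names -- check the backend repository for the exact keys it reads.
OPENAI_API_KEY=your-openai-key
ANTHROPIC_API_KEY=your-anthropic-key
MISTRAL_API_KEY=your-mistral-key
HUGGINGFACE_API_KEY=your-huggingface-key
```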

See the following links to set up accounts on these platforms:

To view your API keys, run the following command in your EC2 instance to list all environment variables:

sudo docker exec <container-id> env

Additionally, “NoReplySenderAddress” can be set. This will be used, for instance, when a user requests a password reset through the application. Note that you must verify your ownership of this email address before Amazon SES will send any emails to your users. This can be a simple Gmail account you have access to.

The application will start deploying and will take several minutes. Once the deployment process is complete, you will be able to visit the application using the IP address output as “AppAddress” under the “Outputs” tab.

We can SSH into the EC2 instance we just created using the same IP address. From a shell/terminal that has SSH enabled, run:

ssh -i key_file.pem ec2-user@IP

The following command will reveal the logs of the installation process:

tail -f /var/log/cloud-init-output.log

Once the installation process has completed successfully, you can view logs from the backend web application with the following command:

sudo docker logs -f $(sudo docker ps -qf ancestor=ai-starter-ai-backend-starter)

Once the installation process has succeeded, you can open the application using your “AppAddress” IP address.

This section guides you through a demonstration of the AI Web Starter App. Using this BoosterPack is compelling because it provides a starting point for web development with key features already implemented, so users can spend their time implementing features that make their business unique.

This demonstration will illustrate the features available in the AI Web Starter App.

Login & Registration

Users can register and login to the platform. Users can also reset their password, which will send an email to the user redirecting them to the Forgot Password flow, where a new password can be set.

Chat Panel & Conversations

Users can communicate with a variety of open-source and private third-party LLMs provided through Huggingface and OpenAI. Single prompt mode allows users to compare models based on a single prompt and response. Conversation mode allows users to compare conversations with the models: chat history is saved, and the models can access the most recent messages while formulating responses. The number of previous messages kept in the LLM’s memory is set in the backend; increasing this value will increase token spend, as each stored message is processed with every prompt. You must include your API keys in the backend .env file to utilize this functionality.
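The memory-window behavior described above can be illustrated with a short sketch. This is not the BoosterPack’s actual backend code; it simply shows how trimming chat history to the last N messages bounds the number of tokens re-sent with each prompt:

```python
# Illustrative sketch of a conversation memory window (not the actual backend code).

def trim_history(messages, window_size):
    """Keep only the most recent `window_size` messages.

    Every retained message is re-sent to the LLM with each new prompt,
    so a larger window means more input tokens (and higher spend) per call.
    """
    if window_size <= 0:
        return []
    return messages[-window_size:]

history = [
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi! How can I help?"},
    {"role": "user", "content": "Summarize our chat."},
]

# With a window of 2, only the two most recent messages are sent as context.
context = trim_history(history, 2)
```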

Settings

The settings page provides users with an area to change their basic personal information and password.

Component Library

This page demonstrates some of the different reusable components available in the frontend repository. This is useful for users who clone the BoosterPack to jumpstart development of their own app and want to use custom components. For a robust library with many highly functional React components, check out UI libraries like Mantine or Shadcn.

Since all our resources were created with CloudFormation, we can release all resources by simply selecting Delete on the CloudFormation Stack. This process will also take several minutes.

Factors to Consider

The AI Web Starter sample application code is available through a public GitHub repository at the link below. Users are encouraged to clone this repository and utilize their preferred method of deployment to use this repository as a basis for their projects. Once cloned, users can create one or multiple private or public GitHub repositories (depending on whether a monorepo architecture is preferred) and push the code to the repository or repositories.

https://github.com/anthonyfierrosoftware/ai-starter

The AI Web Starter sample application can also be used without the LLM functionality for users that want to speed up development of a full-stack web application without any AI integrations. After cloning the repository, utilize the following steps from the Django documentation to remove the aiModule app from the Django REST backend.

https://docs.djangoproject.com/en/5.0/howto/delete-app

On the frontend, you can simply delete the components from the components/LLMs directory and replace the existing Home page with other content. From the state folder, remove modelsConfig.js, and from routes.js, remove the functions fetchConversations and sendChat.

If you want to reuse the CloudFormation deployment with the new repository that you’ve cloned, follow these steps:

  1. Replace the GitHub URLs in the install.sh and cloudformation.json files with the URL to your own repository.
  2. Upload the new CloudFormation file to S3 to utilize the new CloudFormation template.
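Step 1 above is a plain find-and-replace. As a rough sketch (the file paths and fork URL below are assumptions; check the actual files in your clone), it could be scripted like this:

```python
# Sketch: point the deployment files at your own fork (paths and URLs are assumptions).
from pathlib import Path

OLD_URL = "https://github.com/anthonyfierrosoftware/ai-starter"
NEW_URL = "https://github.com/your-org/your-fork"  # hypothetical fork URL


def repoint_repo_urls(paths, old_url, new_url):
    """Replace every occurrence of old_url with new_url in each file."""
    for path in paths:
        p = Path(path)
        p.write_text(p.read_text().replace(old_url, new_url))


# Example invocation, run from the repository root:
# repoint_repo_urls(["install.sh", "cloudformation.json"], OLD_URL, NEW_URL)
```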

The AI Web Starter was designed with the intention that it would be easy to customize, allowing users to utilize whichever LLMs, frontend framework, backend framework, database, or hosting providers they prefer for their specific project.

The needs of each project will determine the best choice for frameworks, tools, and services.

Frontend Alternatives

The AI Web Starter’s frontend is a client-side application written in JavaScript. Next.js or other frameworks can be utilized to implement server-side rendering, and TypeScript, a superset of JavaScript, can also be used depending on project needs.

For more information on server-side rendering vs client-side rendering and other best practices associated with web development, check out our Getting Started with Web Technologies blog here: https://www.1280labs.ca/blog/how-to-build-a-tech-product.

Alternatively, you may also decide that you would prefer to use another frontend web framework, or even a mobile framework, to communicate with the backend API. Individual elements of the BoosterPack’s architecture have been designed to be modular and easily configurable with different technologies.

Backend Alternatives

The Django REST Framework is great for prototyping, as it contains secure user management and an admin panel out of the box, along with other features that many projects would otherwise need to implement over the course of developing a web application. Users are encouraged to utilize whichever backend framework they feel most comfortable with.

Use Django REST Framework When:

  1. Rapid API Development:
    Scenario: You need to build RESTful APIs quickly and efficiently.
    Reason: DRF provides a powerful and flexible toolkit for building APIs with minimal boilerplate code, allowing developers to focus on defining endpoints and business logic rather than low-level details.
  2. Rich Serialization and Validation:
    Scenario: You require robust serialization and validation of data in your API.
    Reason: DRF offers powerful serializers and validators that simplify the conversion of complex data types (e.g., Django models) into JSON or other formats, ensuring data integrity and consistency.
  3. Authentication and Permissions:
    Scenario: You need to implement authentication and authorization mechanisms in your API.
    Reason: DRF provides built-in support for various authentication methods (e.g., token-based, OAuth) and granular permission policies, making it easy to secure your API endpoints.
  4. Browsable API:
    Scenario: You want to provide a user-friendly interface for developers to explore and interact with your API.
    Reason: DRF includes a browsable API feature that generates a navigable HTML interface based on your API endpoints, allowing developers to test, debug, and discover your API resources easily.

Consider Other Frameworks When:

  1. Microservices Architecture:
    Scenario: You are building a distributed system with microservices architecture and need lightweight, scalable API frameworks.
    Alternative: Consider lightweight frameworks like Flask, FastAPI, or Node.js with Express.js, which offer flexibility and performance benefits for microservices-based architectures.
  2. Real-Time APIs:
    Scenario: You need to build real-time APIs with features like WebSockets or server-sent events.
    Alternative: Consider frameworks specifically designed for real-time communication, such as Django Channels, Socket.IO (with Node.js), or frameworks like FastAPI for asynchronous request handling.
  3. GraphQL APIs:
    Scenario: You prefer using GraphQL for building APIs over RESTful endpoints.
    Alternative: Consider GraphQL-specific frameworks and libraries like Apollo Server (Node.js), Graphene (Python), or Ariadne (Python), which provide tools for defining and querying GraphQL APIs.
  4. Performance and Scalability:
    Scenario: You anticipate high traffic or have strict performance requirements for your APIs.
    Alternative: Consider asynchronous frameworks like FastAPI (Python), Tornado (Python), or Node.js with asynchronous libraries, which offer better performance and scalability compared to DRF’s synchronous request-response model.
  5. Customization and Control:
    Scenario: You need fine-grained control over your API’s architecture and behavior.
    Alternative: Consider minimalist frameworks like Flask (Python) or custom-built solutions tailored to your specific requirements, which offer greater flexibility and control at the cost of additional development effort.

Database Alternatives

The AI Web Starter utilizes a PostgreSQL database, as it is the recommended database for use with Django REST. However, users are encouraged to utilize whichever database they feel most comfortable with. Utilizing a vector database, such as Pinecone, will allow the LLMs to easily utilize information from the application user’s namespace while processing prompts. Vector databases are typically used in conjunction with a standard database, storing only the information a chatbot would need to parse.

Another consideration is utilizing a SQL database versus a NoSQL database like MongoDB.

When deploying a web application, it’s crucial to focus on multiple layers of security. The following recommendations are suggestions and may not be relevant to your project.

On AWS, ensure the use of Identity and Access Management (IAM) roles and policies to restrict permissions, enable multi-factor authentication (MFA), and regularly rotate access keys. Use security groups and Network ACLs to control inbound and outbound traffic, and employ AWS Shield and Web Application Firewall (WAF) to protect against DDoS attacks and common web exploits. Encrypt data at rest using AWS Key Management Service (KMS) and in transit with SSL/TLS.

For information on setting up a WAF on AWS, see the following documentation:

https://docs.aws.amazon.com/waf

For the Django backend, prioritize securing settings, such as using environment variables for sensitive information, setting DEBUG=False in production, and configuring proper logging. Implement Django’s built-in security features, like CSRF protection, XSS protection, and SQL injection prevention, for all new code added to the backend. For the React frontend, ensure API requests are secured with authentication tokens, use HTTPS for all communications, and sanitize user inputs to prevent XSS attacks. Regularly update your third-party dependencies and audit them for vulnerabilities.
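For example, pulling sensitive settings from environment variables rather than hardcoding them might look like the sketch below. The variable names are illustrative, not the project’s actual settings:

```python
# Illustrative Django settings.py fragment: sensitive values come from the environment.
import os

# Never hardcode the secret key; read it from the environment.
SECRET_KEY = os.environ.get("DJANGO_SECRET_KEY", "dev-only-insecure-key")

# DEBUG defaults to False so a missing variable fails safe in production.
DEBUG = os.environ.get("DJANGO_DEBUG", "False").lower() == "true"

# Restrict which hosts may serve the app (comma-separated in the environment).
ALLOWED_HOSTS = os.environ.get("DJANGO_ALLOWED_HOSTS", "localhost").split(",")
```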

Check out the AWS Security Documentation, Django security guides, and the OWASP Top Ten for the most critical web application security risks.

https://docs.aws.amazon.com/security

https://docs.djangoproject.com/en/5.0/topics/security

https://owasp.org/www-project-top-ten

The Web Starter frontend does not implement any styling packages and is written in vanilla JS/React. This was intended to make it as simple as possible to integrate any third-party UI packages or other frontend libraries that users can use to speed up development, like Tailwind, Bootstrap, or Material UI.

Users are also encouraged to explore the possibility of converting the JavaScript files to TypeScript, if preferred.

Extending the backend API is simple. The Django REST backend project consists of apps which manage their own routes, models, and serializers. Users can create new apps or modify the existing apps in the project to implement additional backend functionality. For documentation on developing with the Django REST Framework, see the following:

https://www.django-rest-framework.org/tutorial/quickstart/

This project was designed with the intent to be modular, and users are encouraged to replace the existing backend with another framework of their choice if other tools better suit the needs of their project.

When deploying the Sample Solution through CloudFormation, AWS resources with different associated costs will be deployed. For the latest pricing information about AWS services, see the following resources:

SES: https://aws.amazon.com/ses/pricing/

EC2: https://aws.amazon.com/ec2/pricing/on-demand/

The Sample Solution is intended for trialing the Starter App or different LLMs. If you decide to clone and utilize the Starter App as a base for your project, you can deploy any aspect of the architecture to whichever service you prefer. Different services, whether in AWS or external third-party tools, will have different benefits and downsides that should be considered depending on the type of application you are building and how you will need to allocate resources.

Each LLM will also incur costs depending on the number of input and output tokens used. Input tokens are based on the amount of content provided to the LLM within the prompt. Output tokens are based on the amount of content the LLM responds with. For the latest pricing on the LLMs used in the AI Starter App, see the following resources:

https://openai.com/api/pricing/

https://mistral.ai/technology/#pricing

https://www.anthropic.com/api
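As a back-of-the-envelope illustration of how token counts drive cost (the per-token prices below are placeholders, not current rates; always check the pricing pages above):

```python
# Rough cost sketch: the per-1K-token prices are PLACEHOLDER values, not real rates.

def estimate_cost(input_tokens, output_tokens,
                  price_in_per_1k=0.0005, price_out_per_1k=0.0015):
    """Estimate a single call's cost from token counts.

    Input tokens cover the prompt plus any stored conversation history;
    output tokens cover the model's response.
    """
    return (input_tokens / 1000) * price_in_per_1k \
         + (output_tokens / 1000) * price_out_per_1k


# A 2,000-token prompt with a 500-token reply at the placeholder rates:
cost = estimate_cost(2000, 500)
```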

This project and all packages utilized within the project are open-source and available for public use under the MIT license.

The source code is available publicly at the following GitHub repository for any organization that wants to clone the Starter App to use as a base for development: https://github.com/anthonyfierrosoftware/ai-starter

Glossary

The following terminology, as defined below, may be used throughout this document.


Term | Description
API | Application Programming Interface