Frontend

If you have gone through the deployment module, you should be at the point where you have a machine learning model running in the cloud. The model can be interacted with by sending HTTP requests to its API endpoint. In general, we refer to this part of the application as the backend: it runs behind the scenes, the user never sees it, and it is not really that user-friendly. Instead we want to create a frontend that the user can interact with in a more user-friendly way. That is what we will be doing in this module.

Another reason for splitting our application into a frontend and a backend has to do with scalability. If we have a lot of users interacting with our application, we may want to scale only the backend and not the frontend, because the backend is the part that runs our heavy machine learning model. In general, dividing an application into smaller pieces is the pattern used in microservice architectures.

In monolithic applications, everything the user may request from our application is handled by a single process/container. In microservice architectures, the application is split into smaller pieces that can be scaled independently, which also leads to easier maintainability and faster development.

Frontends have for the longest time been created using HTML, CSS and JavaScript. This is still the case, but there are now many frameworks that can help us create a frontend directly in Python.

In this module we will be looking at streamlit. streamlit is an easy-to-use framework that allows us to create interactive web applications in Python. It is not nearly as powerful as a full web framework like Django, but it is very easy to get started with and very easy to integrate with our machine learning models.
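
To give a feel for how little code is needed, here is a minimal sketch of a streamlit application (the file name app.py and its content are just for illustration):

    import streamlit as st

    # streamlit re-runs this script from top to bottom on every user interaction
    st.title("My first streamlit app")
    name = st.text_input("What is your name?")
    if name:
        st.write(f"Hello, {name}!")

Saving this as app.py and running streamlit run app.py will open the application in your browser.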

❔ Exercises

In these exercises we go through the process of setting up a backend using fastapi and a frontend using streamlit, containerizing both applications and then deploying them to the cloud. We have already created an example of this which can be found in the samples/frontend_backend folder.

  1. Let's start by creating the backend application in a backend.py file. You can use essentially any backend you want, but we will be using a simple ImageNet classifier, which we have created in the samples/frontend_backend/backend folder.

    1. Create a new file called backend.py and implement a FastAPI application with a single /classify endpoint that takes an image as input and returns the predicted class (and probabilities) of the image.

      Solution
      backend.py
      import json
      from contextlib import asynccontextmanager
      
      import anyio
      import torch
      from fastapi import FastAPI, File, HTTPException, UploadFile
      from PIL import Image
      from torchvision import models, transforms
      
      
      @asynccontextmanager
      async def lifespan(app: FastAPI):
          """Context manager to start and stop the lifespan events of the FastAPI application."""
          global model, transform, imagenet_classes
          # Load model
          model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
          model.eval()
      
          transform = transforms.Compose(
              [
                  transforms.Resize((224, 224)),
                  transforms.ToTensor(),
                  transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
              ],
          )
      
          async with await anyio.open_file("imagenet-simple-labels.json") as f:
              imagenet_classes = json.loads(await f.read())  # reads from async files must be awaited
      
          yield
      
          # Clean up
          del model
          del transform
          del imagenet_classes
      
      
      app = FastAPI(lifespan=lifespan)
      
      
      def predict_image(image_path: str) -> tuple[torch.Tensor, str]:
          """Predict the class of an image given its path and return the class probabilities and label."""
          img = Image.open(image_path).convert("RGB")
          img = transform(img).unsqueeze(0)
          with torch.no_grad():
              output = model(img)
          _, predicted_idx = torch.max(output, 1)
          return output.softmax(dim=-1), imagenet_classes[predicted_idx.item()]
      
      
      @app.get("/")
      async def root():
          """Root endpoint."""
          return {"message": "Hello from the backend!"}
      
      
      # FastAPI endpoint for image classification
      @app.post("/classify/")
      async def classify_image(file: UploadFile = File(...)):
          """Classify image endpoint."""
          try:
              contents = await file.read()
              async with await anyio.open_file(file.filename, "wb") as f:
                  await f.write(contents)  # writes to async files must be awaited
              probabilities, prediction = predict_image(file.filename)
              return {"filename": file.filename, "prediction": prediction, "probabilities": probabilities.tolist()}
          except Exception as e:
              raise HTTPException(status_code=500) from e
      
    2. Run the backend using uvicorn

      uvicorn backend:app --reload
      
    3. Test the backend by sending a request to the /classify endpoint, preferably using the curl command

      Solution

      In this example we are sending a request to the /classify/ endpoint with a file called my_cat.jpg. The response should be "tabby cat" for the solution we have provided.

      curl -X 'POST' \
          'http://127.0.0.1:8000/classify/' \
          -H 'accept: application/json' \
          -H 'Content-Type: multipart/form-data' \
          -F 'file=@my_cat.jpg;type=image/jpeg'
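
      Alternatively, if you prefer Python over curl, the same request can be sent with the requests library. Here is a small sketch, assuming the backend runs locally on port 8000 and that a file called my_cat.jpg exists:

      import requests

      # send the image as multipart/form-data; the field must be named "file" to match the endpoint
      with open("my_cat.jpg", "rb") as f:
          response = requests.post(
              "http://127.0.0.1:8000/classify/",
              files={"file": ("my_cat.jpg", f, "image/jpeg")},
              timeout=30,
          )
      print(response.json())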
      
    4. Create a requirements_backend.txt file with the dependencies needed for the backend.

      Solution
      requirements_backend.txt
      fastapi>=0.108.0
      uvicorn>=0.25.0
      torch>=2.1.2
      torchvision>=0.16.2
      python-multipart>=0.0.6  # needed by FastAPI to parse uploaded files
      
    5. Containerize the backend into a file called backend.dockerfile.

      Solution
      backend.dockerfile
      FROM python:3.11-slim
      
      RUN apt update && \
          apt install --no-install-recommends -y build-essential gcc git && \
          apt clean && rm -rf /var/lib/apt/lists/*
      
      RUN mkdir /app
      
      WORKDIR /app
      
      COPY requirements_backend.txt /app/requirements_backend.txt
      COPY backend.py /app/backend.py
      COPY imagenet-simple-labels.json /app/imagenet-simple-labels.json
      
      RUN --mount=type=cache,target=/root/.cache/pip pip install -r requirements_backend.txt
      
      EXPOSE $PORT
      CMD exec uvicorn --port $PORT --host 0.0.0.0 backend:app
      
    6. Build the backend image

      docker build -t backend:latest -f backend.dockerfile .
      
    7. Recheck that the backend works by running the image in a container

      docker run --rm -p 8000:8000 -e "PORT=8000" backend
      

      and test that it works by sending a request to the /classify/ endpoint.

    8. Deploy the backend to Cloud run using the gcloud command

      Solution

      Assuming that we have created an Artifact Registry repository called frontend-backend, we can deploy the backend to Cloud Run using the following commands:

      docker tag \
          backend:latest \
          <region>-docker.pkg.dev/<project>/frontend-backend/backend:latest
      docker push \
          <region>-docker.pkg.dev/<project>/frontend-backend/backend:latest
      gcloud run deploy backend \
          --image=<region>-docker.pkg.dev/<project>/frontend-backend/backend:latest \
          --region=<region> \
          --platform=managed
      

      where <region> and <project> should be replaced with the appropriate values.

    9. Finally, test that the deployed backend works as expected by sending a request to the /classify/ endpoint

      Solution

      In this solution we first extract the URL of the deployed backend and then send a request to the /classify/ endpoint.

      export MYENDPOINT=$(gcloud run services describe backend --region=<region> --format="value(status.url)")
      curl -X 'POST' \
          $MYENDPOINT/classify/ \
          -H 'accept: application/json' \
          -H 'Content-Type: multipart/form-data' \
          -F 'file=@my_cat.jpg;type=image/jpeg'
      
  2. With the backend taken care of, let's now write our frontend. Our frontend just needs to be a "nice" interface to our backend: its main functionality will be to send a request to the backend and display the result. See the streamlit documentation for inspiration.

    1. Start by installing streamlit

      pip install streamlit
      
    2. Now create a file called frontend.py and implement a streamlit application. You can design it as you want, but we recommend that the following can be done in the frontend:

      1. Have a file uploader that allows the user to upload an image

      2. Display the image that the user uploaded

      3. Have a button that sends the image to the backend and displays the result

      For now just assume that a environment variable called BACKEND is available that contains the URL of the backend. We will in the next step show how to get this URL automatically.

      Solution
      frontend.py
      import os
      
      import pandas as pd
      import requests
      import streamlit as st
      from google.cloud import run_v2
      
      
      def get_backend_url():
          """Get the URL of the backend service."""
          parent = "projects/my-personal-mlops-project/locations/europe-west1"
          client = run_v2.ServicesClient()
          services = client.list_services(parent=parent)
          for service in services:
              if service.name.split("/")[-1] == "backend":
                  return service.uri
          return os.environ.get("BACKEND", None)
      
      
      def classify_image(image, backend):
          """Send the image to the backend for classification."""
          classify_url = f"{backend}/classify/"
          # the multipart field must be named "file" to match the backend endpoint
          response = requests.post(classify_url, files={"file": image}, timeout=10)
          if response.status_code == 200:
              return response.json()
          return None
      
      
      def main() -> None:
          """Main function of the Streamlit frontend."""
          backend = get_backend_url()
          if backend is None:
              msg = "Backend service not found"
              raise ValueError(msg)
      
          st.title("Image Classification")
      
          uploaded_file = st.file_uploader("Upload an image", type=["jpg", "jpeg", "png"])
      
          if uploaded_file is not None:
              image = uploaded_file.read()
              result = classify_image(image, backend=backend)
      
              if result is not None:
                  prediction = result["prediction"]
                  probabilities = result["probabilities"]
      
                  # show the image and prediction
                  st.image(image, caption="Uploaded Image")
                  st.write("Prediction:", prediction)
      
                  # make a nice bar chart
                  data = {"Class": [f"Class {i}" for i in range(10)], "Probability": probabilities}
                  df = pd.DataFrame(data)
                  df.set_index("Class", inplace=True)
                  st.bar_chart(df, y="Probability")
              else:
                  st.write("Failed to get prediction")
      
      
      if __name__ == "__main__":
          main()
      
    3. We need to make sure that the frontend knows where the backend is located, and we want that to happen automatically so we do not have to hardcode the URL into our frontend. We can do this by using the Python SDK for Google Cloud Run. The following code snippet shows how to get the URL of the backend service or fall back to an environment variable if the service is not found.

      import os

      import streamlit as st
      from google.cloud import run_v2

      @st.cache_resource  # (1)!
      def get_backend_url():
          """Get the URL of the backend service."""
          parent = "projects/<project>/locations/<region>"
          client = run_v2.ServicesClient()
          services = client.list_services(parent=parent)
          for service in services:
              if service.name.split("/")[-1] == "backend":
                  return service.uri
          return os.environ.get("BACKEND", None)
      
      1. 🙋‍♂️ The st.cache_resource decorator tells streamlit to cache the result of the function. This is useful when the function is expensive to run, and because streamlit re-executes the whole script every time the user interacts with the app, caching the lookup avoids calling the Cloud Run API on every interaction.

      Add the above code snippet to the top of your frontend.py file and replace <project> and <region> with the appropriate values. You will need to run pip install google-cloud-run to be able to use the code snippet.

    4. Run the frontend using streamlit

      streamlit run frontend.py
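
      Because the code falls back to the BACKEND environment variable, you can also point the frontend at a locally running backend, e.g. assuming the backend runs on port 8000:

      BACKEND=http://localhost:8000 streamlit run frontend.py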
      
    5. Create a requirements_frontend.txt file with the dependencies needed for the frontend.

      Solution
      requirements_frontend.txt
      streamlit>=1.28.2
      pandas>=2.1.3
      google-cloud-run>=0.10.5
      requests>=2.31.0
      
    6. Containerize the frontend into a file called frontend.dockerfile.

      Solution
      frontend.dockerfile
      FROM python:3.11-slim
      
      RUN apt update && \
          apt install --no-install-recommends -y build-essential gcc git && \
          apt clean && rm -rf /var/lib/apt/lists/*
      
      RUN mkdir /app
      
      WORKDIR /app
      
      COPY requirements_frontend.txt /app/requirements_frontend.txt
      COPY frontend.py /app/frontend.py
      
      RUN --mount=type=cache,target=/root/.cache/pip pip install -r requirements_frontend.txt
      
      EXPOSE $PORT
      
      CMD ["streamlit", "run", "frontend.py", "--server.port", "$PORT"]
      
    7. Build the frontend image

      docker build -t frontend:latest -f frontend.dockerfile .
      
    8. Run the frontend image

      docker run --rm -p 8001:8001 -e "PORT=8001" frontend
      

      and check in your web browser that the frontend works as expected.

    9. Deploy the frontend to Cloud run using the gcloud command

      Solution

      Assuming that we have created an Artifact Registry repository called frontend-backend, we can deploy the frontend to Cloud Run using the following commands:

      docker tag frontend:latest \
          <region>-docker.pkg.dev/<project>/frontend-backend/frontend:latest
      docker push <region>-docker.pkg.dev/<project>/frontend-backend/frontend:latest
      gcloud run deploy frontend \
          --image=<region>-docker.pkg.dev/<project>/frontend-backend/frontend:latest \
          --region=<region> \
          --platform=managed

      where <region> and <project> should be replaced with the appropriate values.
      
    10. Test that the frontend works as expected by opening the URL of the deployed frontend in your web browser.

  3. (Optional) If you have gotten this far, you have successfully created a frontend and a backend and deployed them to the cloud. Finally, it may be worth load testing your application to see how it performs under load. Write a locust file, as covered in this module, and run it against your frontend to make sure it can handle the load you expect it to handle. A minimal starting point is sketched below.
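
     As a starting point, here is a minimal sketch of a locustfile (the file name locustfile.py, the wait times and the single task are just illustrative choices; adapt the tasks to the pages your frontend actually serves):

      locustfile.py
      from locust import HttpUser, between, task


      class FrontendUser(HttpUser):
          """Simulated user that repeatedly loads the frontend landing page."""

          wait_time = between(1, 2)  # wait 1-2 seconds between tasks

          @task
          def visit_frontend(self) -> None:
              """Request the frontend landing page."""
              self.client.get("/")

     Run it with locust -f locustfile.py --host=<frontend-url>, where <frontend-url> is the URL of your deployed frontend.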

  4. (Optional) Feel free to experiment further with streamlit and see what you can create. For example, you can try to create an option for the user to upload a video and then display the video with the predicted class overlaid on top of it.

🧠 Knowledge check

  1. We have created separate requirements files for the frontend and the backend. Why is this a good idea?

    Solution

    This is a good idea because the frontend and the backend may have different dependencies. By having separate requirements files we make sure that we only install the dependencies needed for each specific application. This also has the positive side effect of keeping the docker images smaller. For example, the frontend does not need the torch library, which is huge and only needed by the backend.

This ends the exercises for this module.