Blog Posts

January 14, 2025

Consuming the Plaid API with React and Go


A Modern Full Stack REST Application

Plaid is a popular API that exposes a user's banking information through REST endpoints. We will use their service to build a REST API that generates authorization information, then build a React application that consumes that API to create auth tokens for a user. This tutorial assumes familiarity with React and full-stack development. The repo for this can be found here:
A quickstart for setting up Plaid server and React client - qweliant/BetterPlaidQuickstart

To get started, we need to head over to Plaid for our API keys.

Landing page after login to Plaid

Navigate to the Settings tab and click Keys to view your API keys.

Creds from Plaid

We will use the sandbox environment, which uses a different key for testing than development. Notice on the left sidebar a tab that says API. Click on that and you will see the section about Plaid's redirect URI. Click on Add New URI and add http://localhost:80 and http://localhost:3000. This will control where the redirect after linking happens. If you click on the Test in Sandbox button seen on the landing page, you will be taken to a page showing the test user credentials.

Quickstart repos exist for multiple languages. You can use your language of choice for this project since the client is backend agnostic.

We won't necessarily run the exact setup they have, but next, we will walk through setting up the backend via the Go Not So Quickstart Repo.

Backend

  1. Create a directory for your project, then run:
git clone https://github.com/plaid/quickstart.git
  2. Delete the Makefile, README, and every folder except the go one.

Before

files present

After

files remaining

These come preconfigured with a Dockerfile to make deployment easy. We won't cover deploying Docker containers, but we will spin them up using Docker Desktop. If you need to install it, see Docker's installation instructions for your OS. This is a simple Docker setup, so you should be able to pick up the basics by following along.

The next thing we should do is edit the docker-compose.yml file. We need to remove the container setup procedures for the other examples and make environment variables available from a .env file. After the edits, the file should look like this:

version: "3.4"
services:
  go:
    build:
      context: .
      dockerfile: ./go/Dockerfile
    ports:
      - "8000:8000"
    env_file:
      - .env

Running:

docker-compose up -d --build

will build the image and start a container. If we look at the Dockerfile, we can see how our application is being built. The Go version is specified for the official Go image. A directory is created that we copy everything into. After changing the root directory to the Go folder, we get the required packages and build the binary.

Next, we pull the image that our binary will run on, copy the binary into the image, and expose the app via port 8000. By the way, you should delete the lines that copy HTML and static info over. Our Dockerfile should look like this:

FROM golang:1.12 AS build
WORKDIR /opt/src
COPY . .
WORKDIR /opt/src/go
RUN go get -d -v ./...
RUN go build -o quickstart

FROM gcr.io/distroless/base-debian10
COPY --from=build /opt/src/go/quickstart /
EXPOSE 8000
ENTRYPOINT ["/quickstart"]

If you ran the above docker-compose command, expect warnings about your env variables, or errors if you have not yet deleted the HTML and static copy lines. Let's create a .env file in the same folder as the docker-compose file. This file will define your Plaid credentials. The result should look something like this:

# Get your Plaid API keys from the dashboard: https://dashboard.plaid.com/account/keys
PLAID_CLIENT_ID=CLIENT_ID
PLAID_SECRET=SANDBOX_SECRET
PLAID_ENV=sandbox
PLAID_PRODUCTS=transactions
PLAID_COUNTRY_CODES=US,CA
PLAID_REDIRECT_URI=http://localhost:80

The last thing we should do before moving on to the frontend is edit some portions of the code in server.go. This is because the quickstart repo does not behave as one would expect. This may be due to my own ignorance, but following the steps in the official repo leads to an environment error.

Edit the following in server.go:

var (
    PLAID_CLIENT_ID = os.Getenv("PLAID_CLIENT_ID")
    PLAID_SECRET = os.Getenv("PLAID_SECRET")
    PLAID_ENV = os.Getenv("PLAID_ENV")
    PLAID_PRODUCTS = os.Getenv("PLAID_PRODUCTS")
    PLAID_COUNTRY_CODES = os.Getenv("PLAID_COUNTRY_CODES")
    PLAID_REDIRECT_URI = os.Getenv("PLAID_REDIRECT_URI")
    APP_PORT = os.Getenv("APP_PORT")
)

var environments = map[string]plaid.Environment{
    "sandbox":     plaid.Sandbox,
    "development": plaid.Development,
    "production":  plaid.Production,
}

func init() {
    // set defaults if unset
    if PLAID_PRODUCTS == "" {
        PLAID_PRODUCTS = "transactions"
    }
    if PLAID_COUNTRY_CODES == "" {
        PLAID_COUNTRY_CODES = "US"
    }
    if PLAID_ENV == "" {
        PLAID_ENV = "sandbox"
    }
    if APP_PORT == "" {
        APP_PORT = "8000"
    }
    if PLAID_CLIENT_ID == "" {
        log.Fatal("PLAID_CLIENT_ID is not set. Make sure to fill out the .env file")
    }
    if PLAID_SECRET == "" {
        log.Fatal("PLAID_SECRET is not set. Make sure to fill out the .env file")
    }

    // create Plaid client
    client, err = plaid.NewClient(plaid.ClientOptions{
        PLAID_CLIENT_ID,
        PLAID_SECRET,
        environments[PLAID_ENV],
        &http.Client{},
    })
    if err != nil {
        panic(fmt.Errorf("unexpected error while initializing plaid client %w", err))
    }
}

Now we can proceed with the React setup.

React

Alright, we're back, refreshed, hydrated, and ready for the frontend. Create the app by running:

npx create-react-app plaid

The plaid directory should sit alongside your go folder, in the same parent directory.

Let's add the client service to the docker-compose file. Add it after the go section. The file will change to look something like this:

version: "3.8"
services:
  go:
    env_file:
      - .env
    build:
      context: .
      dockerfile: ./go/Dockerfile
    ports:
      - "8000:8000"
    restart: on-failure
  client:
    stdin_open: true
    env_file:
      - .env
    build:
      context: .
      dockerfile: ./plaid/Dockerfile
    ports:
      - 80:80
    restart: on-failure

Next, create a Dockerfile in the plaid directory. This Dockerfile defines steps for building the production React app and placing it on an Nginx web server. The Dockerfile for this will be:

# STAGE 1 - build the react app
FROM node:alpine as build
WORKDIR /app
COPY ./package.json /app/
RUN yarn --silent
COPY . /app
RUN yarn build

# STAGE 2 - build the final image using a nginx web server
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx/nginx.conf /etc/nginx/conf.d
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

Then, create a folder named nginx and add a file named nginx.conf. This file configures the server to serve the static build (HTML, JS, CSS, etc.) on port 80 in the container.

server {
    listen 80;
    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
    }
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}

Now, run the docker-compose command to verify that the React app is being hosted on port 80. Since the redirect URI is needed for browser navigation after login, you'll need to resolve the URI for dev vs. container environments.

We now need to add axios for client/server communication and react-plaid-link for the Link flow. Inside the plaid directory, run:

yarn add axios react-plaid-link

Then, create a file named Link.js inside the src folder. This is where the logic for making the link will be created.

import React, { useState, useCallback, useEffect } from "react";
import { usePlaidLink } from "react-plaid-link";
import axios from "axios";
import qs from "qs";

const tokenURL = `http://localhost:8000/api/create_link_token`;
const sendTokenURL = `http://localhost:8000/api/set_access_token`;

const Link = () => {
    const [data, setData] = useState("");

    const fetchToken = useCallback(async () => {
        const config = {
            method: "post",
            url: tokenURL,
        };
        const res = await axios(config);
        console.log(res)
        setData(res.data.link_token);
    }, []);

    useEffect(() => {
        fetchToken();
    }, [fetchToken]);

    const onSuccess = useCallback(async (token, metadata) => {
        // send token to server
        const config = {
            method: "post",
            url: sendTokenURL,
            data: qs.stringify({ public_token: token }),
            headers: { "content-type": "application/x-www-form-urlencoded" },
        };
        try {
            const response = await axios(config);
            console.log(response)
        } catch (error) {
            console.error(error);
        }
    }, []);

    const config = {
        token: data,
        onSuccess,
    };

    const { open, ready, error } = usePlaidLink(config);

    if (error) return "Error!";

    return (
        <button onClick={() => open()} disabled={!ready}>
            Connect a bank account
        </button>
    );
}

export default Link;

After running:

docker-compose up

Your app should be running at localhost:80. Try it out and let me know if you encounter any issues!

Congrats! You are now on your way to implementing a fintech banking-as-a-service application. Kinda like...BaaS...Ok ok. Thank you for viewing. Feel free to reach out to me on my socials, or connect via LinkedIn. Have a wonderful day!


January 14, 2025

Deploying Language Models with GCP, FastAPI, Docker, and HuggingFace

I have found that serving more than two models from the API makes the image too large for most deployment procedures. If you know a way around this, let me know.

Initial Set Up

This stack will use FastAPI to serve an endpoint to our model. FastAPI requires uvicorn for serving, and pydantic to handle typing of the request messages. The HuggingFace transformers library specializes in bundling state-of-the-art NLP models in a Python library that can be fine-tuned for many NLP tasks, like Google’s BERT model for named entity recognition or the OpenAI GPT-2 model for text generation.

Using your preferred package manager, install the following:

  • transformers
  • FastAPI
  • uvicorn
  • pydantic

As the packages install, create a folder named app, and add the files nlp.py and main.py to it. In the top level of your directory, add the Dockerfile and the docker-compose.yml file.

After the packages are installed, create a folder named requirements. Add the requirements.txt to the folder. Since I used pipenv to manage the Python environment, I had to run:

pipenv run pip freeze > requirements/requirements.txt

You will need this folder later for building the Docker container. While we are on the topic, be sure you have installed Docker and check to be sure your Docker daemon has started. Link to the setup guide here.

In addition, be sure to install Docker Compose and the Google Cloud SDK. You now have everything needed to proceed to the next step.

The work directory should look similar to this:

app/
  main.py
  nlp.py
requirements/
  requirements.txt
docker-compose.yml
Dockerfile
Pipfile
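If you prefer to script the scaffold, here is a minimal stdlib sketch that creates the tree above (the `fast_hug` project root is a hypothetical name; the Pipfile is omitted since pipenv generates it for you):

```python
from pathlib import Path

# Files from the project tree above (Pipfile excluded; pipenv creates it).
files = [
    "app/main.py",
    "app/nlp.py",
    "requirements/requirements.txt",
    "docker-compose.yml",
    "Dockerfile",
]

root = Path("fast_hug")  # hypothetical project root
for name in files:
    path = root / name
    path.parent.mkdir(parents=True, exist_ok=True)  # create app/, requirements/
    path.touch()  # create an empty placeholder file
```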

NLP

HuggingFace makes it easy to implement and serve state-of-the-art transformer models. Using their transformers library, we will implement an API capable of text generation and sentiment analysis. This code has been adapted from their documentation, so for time's sake we will not dive into the transformer architecture in this article. This also means our models are not fine-tuned for a specific task. Keep an eye out for a future article on fine-tuning and deploying conversational agents.

With that disclaimer out of the way, let’s look at a snippet of the code responsible for our NLP task:

from transformers import (
    pipeline, GPT2LMHeadModel, GPT2Tokenizer
)

class NLP:
    def __init__(self):
        self.gen_model = GPT2LMHeadModel.from_pretrained('gpt2')
        self.gen_tokenizer = GPT2Tokenizer.from_pretrained('gpt2')

    def generate(self, prompt="The epistemological limit"):
        inputs = self.gen_tokenizer.encode(
            prompt, add_special_tokens=False, return_tensors="pt"
        )
        prompt_length = len(self.gen_tokenizer.decode(
            inputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=True
        ))
        outputs = self.gen_model.generate(
            inputs, max_length=200, do_sample=True, top_p=0.95, top_k=60
        )
        generated = prompt + self.gen_tokenizer.decode(outputs[0])[prompt_length:]
        return generated

    def sentiments(self, text: str):
        nlp = pipeline("sentiment-analysis")
        result = nlp(text)[0]
        return f"label: {result['label']}, with score: {round(result['score'], 4)}"

This is a very simple class that abstracts the code for text generation and sentiment analysis. The prompt is tokenized, the length of the encoded sequence is captured, and output is generated. We then receive the decoded output and return it as the generated text.
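The decode-and-slice step is the only subtle part: `generate` returns the prompt tokens plus the new tokens, so the code measures the decoded prompt's length and keeps everything after it. A toy illustration with plain strings, no model involved:

```python
prompt = "The epistemological limit"
# Stand-in for gen_tokenizer.decode(outputs[0]): the decoded output
# always begins with the prompt text itself.
decoded_output = "The epistemological limit is very well understood"

prompt_length = len(prompt)  # mirrors prompt_length in NLP.generate
generated = prompt + decoded_output[prompt_length:]  # prompt + new text only
print(generated)
```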

Example output for text generation:

The epistemological limit is very well understood if we accept the notion that all things are equally good. This is not merely an axiom, but an axiomatical reality of propositions...

Sentiment analysis is even simpler due to the pipeline HuggingFace provides:

from nlp import NLP
nlp = NLP()
print(nlp.sentiments("A bee sting is not cool"))
# Output: 'label: NEGATIVE, with score: 0.9998'

API

FastAPI is one of the fastest API frameworks to build and serve requests in Python. It can be scaled and deployed on a Docker image they provide or you can create your own from a Python image. If you have ever written a Flask API, this should not be difficult.

Here’s an example:

from typing import Optional

from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from pydantic import BaseModel
from app.nlp import NLP

class Message(BaseModel):
    input: str
    output: Optional[str] = None

app = FastAPI()
nlp = NLP()

origins = [
    "http://localhost",
    "http://localhost:3000",
    "http://127.0.0.1:3000"
]

app.add_middleware(
    CORSMiddleware,
    allow_origins=origins,
    allow_credentials=True,
    allow_methods=["POST"],
    allow_headers=["*"],
)

@app.post("/generative/")
async def generate(message: Message):
    message.output = nlp.generate(prompt=message.input)
    return {"output": message.output}

@app.post("/sentiment/")
async def sentiment_analysis(message: Message):
    message.output = str(nlp.sentiments(message.input))
    return {"output": message.output}

Run the server:

uvicorn app.main:app --reload

Visit http://127.0.0.1:8000/docs to test the API.
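You can also hit the endpoints from a script. Here is a hedged stdlib sketch using `urllib`; it assumes the server above is listening locally, so the actual call is left commented out for you to run once the server is up:

```python
import json
import urllib.request

# Build a request body matching the Message model: {"input": ...}.
payload = json.dumps({"input": "A bee sting is not cool"}).encode("utf-8")
req = urllib.request.Request(
    "http://127.0.0.1:8000/sentiment/",  # assumed local address and port
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# With the server running, uncomment to send the request:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["output"])
```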


Containerization

Edit the Dockerfile as follows:

FROM python:3.7

COPY ./requirements/requirements.txt ./requirements/requirements.txt
RUN pip3 install -r requirements/requirements.txt

COPY ./app /app
RUN useradd -m myuser
USER myuser

CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8080"]

Use docker-compose.yml to simplify the process:

version: "3"
services:
  app:
    build: .
    container_name: "nlp_api"
    ports:
      - "8000:8080"
    volumes:
      - ./app/:/app

Run:

docker-compose build
docker-compose up -d

Deployment

To deploy on GCP:

  1. Tag your Docker image:
docker tag nlp_api gcr.io/fast_hug/nlp_api:latest
  2. Push to Google Container Registry:
docker push gcr.io/fast_hug/nlp_api:latest
  3. Deploy via Cloud Run from the GCR dashboard.

Conclusion

In this post, we used HuggingFace’s state-of-the-art NLP models to power a FastAPI service, containerized for scalability. Stay tuned for more on fine-tuning and deploying conversational agents!

January 14, 2025

A Brief and Impassioned Review of Hamilton the Musical

Introduction

Hear me out—I love blackwashing U.S. history. For example, we all know Beethoven was Black. I’m convinced Pushkin only fired on D’anthes after he used the N-word, allegedly. Do I have proof? No. Am I just saying? Absolutely.

The Struggle of Blackwashing History

While blackwashing history can be fun, Hamilton reveals that most of it is wishful thinking. Sure, Pushkin probably didn’t fire on D’anthes for the N-word. But my spite for Europe’s treatment of... well, everything... sometimes ruins my love for blackwashing. For context: my grandmother didn’t pick cotton as a child because of the Ottomans.

Mixed Feelings About Hamilton

Listening to Hamilton, I can’t fully immerse myself in its narrative. At best, Aaron Burr gave life to an abolitionist, allegedly. Meanwhile, Hamilton stood beside a man with slave teeth. How does that work?

I do not want to hear from you one more time, King George. Boo! Go home, Roger. You’ve got teeth to remove from your slaves.

And yet, when Lin-Manuel Miranda says, “My name is Alexander Hamilton,” I feel something. This is the world’s longest-running democracy. Sure, they didn’t explicitly write race and class into the Bill of Rights—technically—but that’s still something.

The example George Washington set by leaving office was revolutionary during an era of Bolívars and Napoleons.

For just a second, when “Raise a glass to freedom” comes on, I feel patriotic. I think:

Is this what white people feel all the time? Like they belong to a historical struggle for freedom rooted in the founding of the U.S.?

The Inevitable Letdown

But then reality sets in:

  • Massive anti-trans legislation
  • Hamilton ruining his career by publicly explaining why he cheated on his wife.
  • Hamilton advising his son to duel and not shoot.

My man, you’re a goof. Big goof. Burr smoking on the Hamilton pack.

Final Thoughts

My favorite thing about Hamilton? Hamilton is a goof, and Burr is a dummy. The musical moves me to set the bar extremely low for large groups of people, which, surprisingly, is cathartic.


PACKWATCH 🚬

RIPBOZO 💀 REST IN PISS 🕊️ YOU WON’T BE MISSED