This is an example of how to set up a LangChain application with a Dockerfile to deploy on Kinsta’s Application Hosting services from a GitHub repository.

The LangChain framework is intended for developing language-model-powered applications that are data-aware (connect a language model to other sources of data) and agentic (allow a language model to interact with its environment). More information is available on the LangChain website.

Kinsta automatically installs dependencies defined in your requirements.txt file during the deployment process.
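For reference, a minimal requirements.txt for a LangChain app that calls OpenAI might look like the following. The exact packages and versions in the template repository may differ; pin versions as needed for reproducible builds.

```
langchain
langchain-openai
```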

  1. Log in to GitHub and create a new repository from this template (Use this template > Create a new repository): Kinsta – Hello World – LangChain.
  2. In MyKinsta, add an application with the Hello World – LangChain repository.
  3. Log in to OpenAI (create an account if you do not already have one). Go to OpenAI API and generate and copy your API key.
  4. In Environment variables, in Key 1, enter OPENAI_API_KEY, and in Value 1, paste the API key you copied from OpenAI.
  5. In the Build environment step, select Use Dockerfile to set up container image. The Dockerfile path and Context can be left blank.
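A Dockerfile for a Python LangChain app typically installs dependencies from requirements.txt and then starts the app. The sketch below is a minimal example, not the template repository's actual Dockerfile; the entry point (app.py) and port are assumptions, so adjust them to match your application.

```dockerfile
# Minimal sketch of a Dockerfile for a Python LangChain app (assumed entry point: app.py).
FROM python:3.11-slim

WORKDIR /app

# Copy and install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the application code.
COPY . .

# Expose the port the web server listens on (adjust to your app's configuration).
EXPOSE 8080
CMD ["python", "app.py"]
```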

The app is available as soon as the build finishes, and the Kinsta Welcome page loads at your application’s URL.

Kinsta Welcome page after successful installation of LangChain.

Web Server Setup

Build Environment

When creating your LangChain application, you must choose Use Dockerfile to set up container image in the Build environment step.

Environment Variables

In Environment variables, in Key 1, enter OPENAI_API_KEY, and in Value 1, paste the API key you copied from OpenAI. If you use models from a different provider (not OpenAI's), adjust the key and value as needed.
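Inside the application, the key set in MyKinsta is read from the process environment. A small helper like the sketch below (the function name is hypothetical, not part of LangChain or the template) fails fast with a clear message when the variable is missing, which makes misconfigured deployments easier to diagnose:

```python
import os

def require_api_key(name: str = "OPENAI_API_KEY") -> str:
    """Read an API key from the environment, raising a clear error if it is unset."""
    key = os.environ.get(name)
    if not key:
        raise RuntimeError(
            f"{name} is not set. Add it under Environment variables in MyKinsta."
        )
    return key
```

LangChain's OpenAI integrations pick up OPENAI_API_KEY from the environment automatically, so in many cases calling a helper like this at startup is only needed for early, explicit validation.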