Django Channels, ASGI, and Heroku

Getting it up and running

The first thing to do was get the service up and running. I started my project on Django 3.0.3 and Channels 2.4.0. The most important step is to get things running locally, which is straightforward.
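For reference, the local setup amounts to a handful of settings. Here's a minimal sketch assuming Channels 2 with its in-memory layer for development — the project name `yourapp` is a placeholder, not from the original project:

```python
# settings.py (fragment) -- minimal Channels 2 configuration for local dev.
# "yourapp" is a placeholder project name.

INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'channels',  # lets Channels take over runserver locally
]

# Points Channels at the ProtocolTypeRouter defined in yourapp/routing.py
ASGI_APPLICATION = 'yourapp.routing.application'

# The in-memory layer is fine locally; production needs Redis.
CHANNEL_LAYERS = {
    'default': {
        'BACKEND': 'channels.layers.InMemoryChannelLayer',
    },
}
```

With that in place, `python manage.py runserver` serves both HTTP and websocket traffic during development.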

Deploying to Heroku

The article from 2016 recommends using Daphne, an HTTP/websocket protocol server for ASGI maintained by the Django Project. Though it's seemingly well-maintained, numerous posts online indicate that it's not very fast (taking seconds to load a route that would otherwise load almost instantly with gunicorn/WSGI). I went with uvicorn instead, which came with two caveats:

  1. uvicorn doesn’t pull from $PORT. You must specify --port $PORT or it won’t boot. I specified --host while I was at it just in case, but it might be required.
  2. uvicorn doesn’t have an equivalent to --max-requests-jitter in gunicorn. --max-requests in gunicorn allows you to restart a worker after some number of requests, and this is available in uvicorn as --limit-max-requests. Without the jitter option, though, you potentially set yourself up for thundering herds of server reboots. That’s not a problem for a project with no traffic (like mine), but I couldn’t for instance deploy it to Pinecast in its current state. This is perhaps a good open source contribution that I could make in the coming weeks.
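To illustrate what the missing jitter buys you: gunicorn randomizes each worker's restart threshold so all the workers don't recycle at the same moment. A rough sketch of the idea (not gunicorn's actual code):

```python
import random

def worker_request_limit(max_requests, jitter):
    # Each worker draws its own cap in [max_requests, max_requests + jitter],
    # staggering restarts instead of synchronizing them.
    return max_requests + random.randint(0, jitter)

# Four workers with --max-requests 1200 --max-requests-jitter 200 would
# each restart at a (likely different) request count.
limits = [worker_request_limit(1200, 200) for _ in range(4)]
```

Without the jitter term, every worker hits its limit at roughly the same time under even load, and they all restart together.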

Making it run

I have been very frugal so far and have stuck around on the Hobby tier to get this project built before I start getting beta users into it. Being on this tier allowed me to catch an issue early: after I logged into the Django Admin panel and started adding data, my server mysteriously started 500ing. Running heroku logs --tail showed an error ending with:

app[web.1]: File "/app/.heroku/python/lib/python3.7/site-packages/django/utils/", line 26, in inner
app[web.1]:     return func(*args, **kwargs)
app[web.1]: File "/app/.heroku/python/lib/python3.7/site-packages/django/db/backends/postgresql/", line 185, in get_new_connection
app[web.1]:     connection = Database.connect(**conn_params)
app[web.1]: File "/app/.heroku/python/lib/python3.7/site-packages/psycopg2/", line 130, in connect
app[web.1]:     conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
app[web.1]: django.db.utils.OperationalError: FATAL: too many connections for role "gihouwrmsatwpc"
The cause: every worker process opens its own database connections, and hobby-tier Postgres allows only 20 at once. Worse, a dyno reports the CPU count of the underlying host rather than the dyno's own share, so any worker count derived from it will be far too high:

>>> import multiprocessing
>>> multiprocessing.cpu_count()
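Some back-of-the-envelope math (the per-worker figure below is hypothetical, but representative) shows how quickly a worker-per-core setup blows past the hobby tier's cap:

```python
# Hypothetical numbers: Heroku dynos report the host's core count, which
# can be 8 even on a small dyno.
reported_cpus = 8
connections_per_worker = 3   # e.g. a few pooled DB connections each

total = reported_cpus * connections_per_worker
hobby_tier_limit = 20        # Heroku hobby-tier Postgres connection cap

# 24 connections requested vs. 20 allowed -> OperationalError
assert total > hobby_tier_limit
```

The lesson: on Heroku, set your worker count explicitly rather than deriving it from `multiprocessing.cpu_count()`.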


To make the app actually take advantage of the ASGI+Channels setup, I installed Graphene Subscriptions to go along with Graphene Django. With relatively little boilerplate, Graphene Subscriptions allows you to listen to Django signals for model save (create or update) and delete. For instance, the following code allows you to create a GraphQL subscription for changes to any of a Room's Messages in a chat application:

import graphene
from graphene_subscriptions.events import CREATED

from ..models import Message  # Our Django model for messages
from .message_type import MessageType  # Our Graphene Message type


# Our GraphQL subscription class
class Subscription(graphene.ObjectType):
    # Define what the user can subscribe to in our schema
    message_created = graphene.Field(
        MessageType,
        room=graphene.Argument(graphene.ID, required=True),
    )

    def resolve_message_created(root, info, **kw):
        def filter(event):
            return (
                event.operation == CREATED and
                isinstance(event.instance, Message) and
                event.instance.room_id == kw.get('room')
            )

        return root.filter(filter).map(lambda e: e.instance)
On the client, the subscription query looks something like this:

subscription MessageCreated($room: ID!) {
  messageCreated(room: $room) {
    sender {
      avatar(size: 32)
    }
  }
}
    def resolve_message_created(root, info, **kw):
        def filter(event):
            return (
                event.operation == CREATED and
                isinstance(event.instance, Message) and
                event.instance.room_id == kw.get('room')
                # nice and safe
            )

        return root.filter(filter).map(lambda e: e.instance)
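The resolver above is just filtering a stream of model events, and the idea can be sketched without Django or Graphene at all. The `Event` and `Message` stand-ins below are illustrative, not graphene-subscriptions' real classes:

```python
from collections import namedtuple

CREATED = 'created'  # stand-in for graphene_subscriptions.events.CREATED
Event = namedtuple('Event', ['operation', 'instance'])
Message = namedtuple('Message', ['room_id', 'text'])

def room_filter(room_id):
    # Same predicate shape as the resolver's inner filter().
    def _filter(event):
        return (
            event.operation == CREATED and
            isinstance(event.instance, Message) and
            event.instance.room_id == room_id
        )
    return _filter

events = [
    Event(CREATED, Message(room_id='1', text='hello')),
    Event(CREATED, Message(room_id='2', text='elsewhere')),
    Event('deleted', Message(room_id='1', text='gone')),
]
matching = [e.instance.text for e in events if room_filter('1')(e)]
# matching == ['hello']
```

Every subscriber runs a predicate like this over the full event stream, which matters later when we look at how events are broadcast.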

Making it work with Apollo

Getting this up and running with Apollo wasn't very hard. First, our ApolloClient needs to be updated with a WebSocketLink to our back-end, which teaches Apollo how to connect a web socket to our server. After installing the necessary dependencies, defining the link was simple:

import {WebSocketLink} from 'apollo-link-ws';

const wsLink = new WebSocketLink({
  // The host and path here are placeholders; substitute your own endpoint.
  uri: `${window.location.protocol === 'https:' ? 'wss' : 'ws'}://${
    window.location.host
  }/graphql/`,
  options: {
    reconnect: true,
  },
});
import {split} from 'apollo-link';
import {getMainDefinition} from 'apollo-utilities';

const link = split(
  // Route subscription operations to the web socket link,
  // everything else to the HTTP link.
  ({query}) => {
    const definition = getMainDefinition(query);
    return (
      definition.kind === 'OperationDefinition' &&
      definition.operation === 'subscription'
    );
  },
  wsLink, // Our new web socket link
  httpLink, // Our existing HTTP link
);
The first websocket connection, however, failed on the server with:

ValueError: Django can only handle ASGI/HTTP connections, not websocket.

That's because asgi.py was still serving Django's plain ASGI handler rather than the Channels router. The fix:
diff --git a/yourapp/ b/yourapp/
index eb61b00..19e073e 100644
--- a/yourapp/
+++ b/yourapp/
@@ -9,8 +9,10 @@
 
 import os
 
-from django.core.asgi import get_asgi_application
+import django
+from channels.routing import get_default_application
 
 os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'yourapp.settings')
+django.setup()
-application = get_asgi_application()
+application = get_default_application()
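get_default_application() resolves whatever the ASGI_APPLICATION setting points at, which under Channels 2 is typically a ProtocolTypeRouter in a routing module. A sketch of what that module might contain — the consumer name and URL are placeholders, not from the original project:

```python
# yourapp/routing.py -- hypothetical Channels 2 routing module.
from channels.auth import AuthMiddlewareStack
from channels.routing import ProtocolTypeRouter, URLRouter
from django.urls import path

from messaging.consumers import GraphqlWsConsumer  # placeholder consumer

application = ProtocolTypeRouter({
    # HTTP falls through to the normal Django view stack automatically;
    # only websocket connections need explicit routing here.
    'websocket': AuthMiddlewareStack(
        URLRouter([
            path('graphql/', GraphqlWsConsumer),
        ])
    ),
})
```

AuthMiddlewareStack is what makes the Django session user available to consumers over the socket.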

Making it work on Heroku

I managed to get the application up and running on Heroku without much fuss. If you're behind Cloudflare, though, be sure websockets are enabled for your domain!

Soon after, though, mutations that publish subscription events began failing:

ERROR:    An error occurred while resolving field Mutations.createMessage
Traceback (most recent call last):
File "[...]/site-packages/graphql/execution/", line 452, in resolve_or_error
    return executor.execute(resolve_fn, source, info, **args)
[...]File "/app/messaging/schema/mutations/", line 68, in mutate
[...]File "[...]/site-packages/aioredis/", line 322, in execute
    raise ConnectionClosedError(msg)
aioredis.errors.ConnectionClosedError: Reader at end of file
  • Upgrading to the latest Redis channel layer reportedly resolves the problem for some. I was already on the latest version, so this was not the issue.
  • The problem seems to be mostly unique to Heroku (though some folks have experienced it locally), which indicates a configuration issue or bug in Heroku’s Redis implementation.
  • Folks have reported success when switching from Heroku’s first-party Redis offering to Redis Cloud (another Heroku add-on). Some folks reported intermittent issues even after moving to Redis Cloud, though.
What ultimately worked for me was disabling the idle timeout on the Redis instance, so Heroku stops reaping quiet connections:

$ heroku redis:timeout [your-redis-instance-0000] --seconds=0
Timeout for [your-redis-instance-0000] (REDIS_URL) set to 0 seconds.
Connections to the Redis instance can idle indefinitely.

Making it production-ready

The other part of this process is making sure the application is ready for production users! Having it work is only half of the battle.

First, the connection to Redis needs to be wrapped in TLS, which is what Heroku's stunnel buildpack handles. Add it ahead of the Python buildpack:

$ heroku buildpacks:add -i 1 heroku/redis

Next, prefix the web process in the Procfile with bin/start-stunnel:

web: bin/start-stunnel uvicorn ironcoach.asgi:application --limit-max-requests=1200 --port $PORT

Finally, update your settings to prefer the tunneled URL that the buildpack exposes:

# Remove this:
FOO = os.environ.get('REDIS_URL')
# And do this:
REDIS_URL = os.environ.get('REDIS_URL_STUNNEL') or \
    os.environ.get('REDIS_URL')

On the next deploy, the build output should confirm the tunnel is set up:
-----> stunnel app detected
-----> Moving the configuration generation script into app/bin
-----> Moving the start-stunnel script into app/bin
-----> stunnel done
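The settings change is just an environment-variable fallback, and the behavior is easy to sketch in isolation (the variable names follow the REDIS_URL_STUNNEL convention used above):

```python
def redis_url(environ):
    # Prefer the stunnel-wrapped URL when the buildpack provides one;
    # fall back to the plain REDIS_URL when the tunnel isn't configured
    # (e.g. in local development).
    return environ.get('REDIS_URL_STUNNEL') or environ.get('REDIS_URL')

assert redis_url({'REDIS_URL': 'redis://plain:6379'}) == 'redis://plain:6379'
assert redis_url({
    'REDIS_URL': 'redis://plain:6379',
    'REDIS_URL_STUNNEL': 'redis://127.0.0.1:6380',
}) == 'redis://127.0.0.1:6380'
```

The `or` fallback means the same settings file works both locally and on Heroku without branching on environment.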

Other notes

While setting things up, I ran into a few other problems and hiccups along the way.

graphene-subscriptions only supports one group

I found that graphene-subscriptions uses the same Channels group for every message it broadcasts. This means that every time a registered model is created, updated, or deleted (the events you wire up signals for), the event is broadcast to every server instance listening for messages, regardless of which models their subscribers actually care about.
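One way to cut that fan-out would be to derive the group name from the model, so instances only receive events for models their subscribers watch. This is a hypothetical scheme, not something graphene-subscriptions does today:

```python
def group_for_model(model_name):
    # Hypothetical naming scheme: one Channels group per model class,
    # instead of a single shared group for every event.
    return f'graphene-subscriptions.{model_name.lower()}'

# A Message save would then be sent only to listeners of the
# "graphene-subscriptions.message" group.
assert group_for_model('Message') == 'graphene-subscriptions.message'
```

Subscribers would join only the groups for the models they resolve, and the per-subscriber filter shown earlier would have far fewer events to discard.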

stunnel crashes on startup

For an unknown reason, stunnel was failing to start when my dynos restarted, with a spooky error:

INTERNAL ERROR: systemd initialization failed at stunnel.c, line 101

What’s next?

I hope you found this useful! If you enjoyed it, please let me know what you’d like to see me write about in the future.



Matt Basta

A salty software guy. Stripe by day, Pinecast by night.