
Notes from a year of working with the JACK Audio Connection Kit

Last year I became the lead engineer on a new project. After joining the project I spent a lot of time helping users and engineers debug JACK audio behavior (although JACK typically wasn't the cause of the observed issue). While I had used Core Audio in the past, I wasn't as familiar with JACK at the time, so I thought I would write down a few of the things I have learned to check first when somebody tells me they are having an issue with JACK. These are specific to running JACK on Linux, in my case Ubuntu.

```mermaid
flowchart LR
    A[Does JACK server start?] --> B{No}
    B --> C[What is your clock source, and is it valid?]
    C --> D{Does your device provide JACK a clock?}
    D --> E{"Does your device have an internal clock,\n or does it rely on an external clock\n (for instance a PCI MADI card setup)?"}
    C --> F{Are all of your cables connected?}
    F --> G{Are they in the right direction?}
    F --> H{Are they all showing healthy conditions on\n their LED or other connection health indicator?}
    B --> I[Does ALSA recognize your sound card/interface\n as a capture and/or playback device?]
    I --> J{No}
    J --> K{"How is the device connected (USB, PCI, etc),\n and do you need any additional drivers?"}
    J --> L{"Using the correct tools (lsusb, etc)\n does the kernel recognize the device is present?\n If not you're debugging hardware."}
    I --> M{Yes}
    M --> N{Does ALSA recognize that the device supports\n the sample rate you want to run the JACK server at?}
    M --> O{Do you need to customize any channel or other\n settings with alsamixer to have the correct IO setup?}
    M --> P{Can you use alsa utilities to send or\n receive signal to the device?}
    P --> Q["If your clock source is valid, and ALSA\n recognizes the devices and can route signal in/out of the device\n (depending on playback and capture capabilities and requirements)\n then we probably have a JACK settings issue.\n If you don't verify the clock and ALSA\n settings first though you may be looking in the wrong spot for the issue."]
    C --> R{Clock is valid and ALSA can use the device}
    R --> S{Are you starting JACK in daemon mode or dbus?}
    S --> T{If daemon mode:}
    T --> U{Is there anything blocking the daemon from starting?\n For instance another jack server that is already\n running via pulseaudio or another process?}
    R --> V{Are you passing the right hardware identifier\n to JACK to select the correct playback/capture device\n that the process will use with the server?}
    R --> W{Are you starting the server with a sample rate,\n frame size and period supported by the device?}
    W --> X{Use ALSA tools or qjackctl to review device settings\n and configuration options.}
    R --> Y{What does JACK tell you if you start the process\n with the `--verbose` flag?}
```
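
In practice, most of the boxes in that chart map onto a handful of commands. Here is a rough sketch of the sequence I run, assuming a USB interface that ALSA enumerates as card 0 (hw:0) and a device that supports 48 kHz; substitute the values your own hardware reports:

```sh
# Does the kernel see the device at all? (Use lspci for PCI cards.)
lsusb

# Does ALSA expose it as a playback and/or capture device?
aplay -l
arecord -l

# Review channel/routing settings; check supported rates and formats.
alsamixer -c 0
cat /proc/asound/card0/stream0    # USB audio devices only

# Can you push signal through the device with plain ALSA?
speaker-test -D hw:0 -c 2 -r 48000
arecord -D hw:0 -f S32_LE -r 48000 -d 5 /tmp/capture-test.wav

# If all of that works, start JACK in the foreground with verbose
# output. The rate, period size and period count here are assumptions;
# use values the device actually supports.
jackd --verbose -d alsa -d hw:0 -r 48000 -p 256 -n 2

# Or, if your session manages JACK over D-Bus:
jack_control start && jack_control status
```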

That's a lot to check, and I employ various ALSA CLI utilities along with JACK tools like qjackctl to collect information as I iterate through the items above until I get to a state where the JACK server will start. Once the JACK server is running, the errors that do get reported tend to fall into a couple of categories (I'm sure there are more; this is just based on my experience so far):

  1. JACK client issues
    1. This is a pretty broad space, since a JACK client can represent just about any application/algorithm that you want to wrap as a JACK client with audio or MIDI in/out port support. A few things I've seen:
      1. The application runs at a fixed rate that isn't compatible with the rate the server is running at.
        1. You will either need to update the client, run the server at a different rate, or do sample rate conversion.
      2. The application parses JACK server output information and doesn't behave correctly when the server runs in verbose mode.
        1. There is probably a better way to get the information the client needs without parsing jack server output.
      3. The MIDI messages that the application receives are not parsed correctly.
        1. Use MIDI utilities to check the message format/structure and either fix the sending or receiving application code (see the sketch after this list).
      4. Resource errors - these tend to be addressable with standard tools (writing tests, profiling the application, static analysis, etc).
        1. Memory errors.
        2. Callback code doesn't execute within the allotted time.
        3. Improper port management.
        4. Logic bugs.
  2. Unstable clock
    1. This one is a little tricky. If the JACK server starts but then loses its clock source, you can check a few things.
      1. Is the device still connected correctly, and if it's external did it lose power?
      2. Check syslogs and dbus. Did the hardware disconnect and reconnect?
      3. If the device clock is external does the device have any logs that you should review?
      4. If none of the points above expose any useful information then you will probably need to do deeper monitoring, and possibly troubleshooting at the driver/device level.
        • Does the error happen on a set time interval?
        • Can you replicate it on a second machine?
        • What debug information is available at the driver level?
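
For the MIDI, port and clock checks above, a few more commands cover most of the ground. Another sketch; the 24:0 port address is an assumption, taken from whatever `aseqdump -l` reports on your machine:

```sh
# Decode incoming MIDI events before blaming a client's parser.
aseqdump -l          # list readable MIDI ports
aseqdump -p 24:0     # print decoded events from the chosen port

# List JACK ports and their current connections to spot
# port-management mistakes.
jack_lsp -c

# Watch the kernel log for the device disconnecting and reconnecting
# when the clock drops out.
journalctl --dmesg --follow
```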

Working with and building JACK applications has been a really interesting experience. Making sure everything aligns as expected from hardware through software has taught me a lot. Maybe some of this will help somebody else if they are debugging JACK workflows for the first time.

My litmus test for a healthy system

A friend once told me that a complicated system that works is made up of many simple systems that work. It's something that rings true to me, and I think about it every time I interact with a digital system.

To help build those simple systems that we compose into complex systems, I want to share a few items that help me measure how healthy a simple system is, or isn't.

Testing

This is my first step these days. Show me your tests. If they are not there, then that tells me most of what I need to know about the state of the project. If they are there, then I have a natural starting point for onboarding and understanding. I have a way to execute and inspect at least parts of the system, and depending on the scope and levels of the tests, maybe the whole system.

It may be overstating it, but testing to me is one of the best skills a software engineer can develop, and the presence and scope (or lack) of automated tests gives me most of the information I need when considering the health of a project.

I think it's hard to overstate how much you can learn from a good test suite, and how fast you can go with a robust, reliable one. Over the years I've had many engineers ask me how they can know whether or not they are using the right abstractions and designs. I struggled to answer this for a long time (often recommending various books). These days I tell them to write the tests. If it's hard to test, or the test sucks to write, then you're probably not on the right track with your design. Step back and reconsider your approach and what other options exist.

Publishing

What good is writing code if you can't ship it? Ultimately we are building these things to do something in the physical world, right? Maybe there isn't a tangible change to the physical world, but somebody is interacting with or getting some form of utility from this thing we are building. When we make changes we need a way to reliably ship those changes to the destination host system where they will run.

There are many ways to package up software and get it to a host system today (containers, debs, MSIs, app store artifacts, etc). Pick the one that makes the most sense for your project, use tools that help generate the target artifact (CPack, buildx, etc), and then build a publishing process that makes shipping easy (with appropriate permissions) once a change has been signed off on. That process can be anything from a script that makes publishing reproducible to a fully automated pipeline.
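
As a sketch of the low end of that spectrum, here is what a minimal publish script might look like for a CMake/CPack project. The layout, generator choice and the commented-out registry are assumptions for illustration, not a prescription:

```sh
#!/usr/bin/env sh
# Minimal reproducible publish: configure, build, test, package.
set -eu

cmake -S . -B build -DCMAKE_BUILD_TYPE=Release
cmake --build build
ctest --test-dir build --output-on-failure

# Generate the target artifact (a .deb via CPack in this sketch).
cpack --config build/CPackConfig.cmake -G DEB

# Or build and push a container image instead:
# docker buildx build -t registry.example.com/myapp:latest --push .
```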

When publishing is hard, I have found that it discourages work from getting finished. A lot of WIP builds up, or everything just takes longer, and it's harder to keep up with what is or isn't done. The shorter the time between somebody deciding to implement a change, having it reviewed, and shipping it, the better. This also makes it easier to identify and fix bugs, because let's face it, there will be bugs and there will be unknowns; the faster we can address them the better.

Docs

A project might generate many forms of docs, but there are two primary forms of documentation I'm interested in while working on a project.

Decision Records

README and other developer guides are nice for helping me set up an environment, but if those don't exist I can hopefully get the information I need from pipelines, scripts and other files. What I will never be able to gain context on without docs (or word of mouth from project elders who have been around long enough to know, and have an accurate recollection) is why the project exists in its current state. That's something Architecture Decision Records or similar documents can help with. Why were certain decisions made (what problem was being solved), what options were considered, and why did the team go with the solution that I'm looking at today?
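
A record doesn't need to be elaborate. As a sketch, the scaffold below follows the common Status/Context/Decision/Consequences shape; the path and numbering are just examples:

```sh
# Scaffold a new decision record (path and title are illustrative).
mkdir -p docs/adr
cat > docs/adr/0001-example-decision.md <<'EOF'
# 0001: Short title of the decision

## Status
Accepted

## Context
What problem were we solving? What constraints applied?

## Decision
What we chose, and the options we considered but rejected.

## Consequences
What becomes easier or harder because of this choice.
EOF
```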

User Docs

User docs help me understand what somebody using the system can expect. What have we communicated to our users about what this tool/application/system can do? What guidance have we provided to them? How have we encouraged users to engage with our team and project if they have questions, need support, etc.? The state of user docs conveys context about our system and how much we care about the people using it.

Wrap up

I thought about including other items on this list: abstractions, data structures, dependency management, build systems, etc. But honestly I think those roll up into the three points above. If you're able to publish your project reliably, then you probably have build systems and dependency management covered. If you have a well orchestrated and reliable test harness, then you likely have healthy(ish) abstractions, though maybe this doesn't extend to choosing the "best" data structures. And while you could have a healthy publishing and test system without docs, I think that only works at a small scale. Ultimately we need to communicate across time, space and individuals, and docs are a great way to do that. There is no reason they can't be part of your publishing system, so that your docs live with the project and evolve as the system evolves.

See also 12 steps to better code.