Dan Farrelly's talk focuses on strategies for improving API performance and reliability using events and background functions. Dan emphasizes the need for a fast, reliable user experience, highlighting how complex operations and the evolution of API endpoints can cause bloated code, slow application performance, and potential reliability issues.
These potential pitfalls, such as multiple failure points in processes dependent on third-party APIs, can be mitigated by offloading non-critical logic from the API's critical path and moving to asynchronous operations. The implementation of tools like queues facilitates the management of these background jobs and ensures their reliable execution.
Dan also advocates for robust recovery plans, logging for debugging, and the adoption of event-driven architecture for APIs. The latter provides several benefits such as improved decoupling, scalability, and more efficient API designs. This approach supports parallel execution of independent functionalities and facilitates better recovery, debugging, and analysis.
Finally, Dan emphasizes the importance of creating APIs that are fast, reliable, and user-friendly, while also ensuring that extra processing doesn't affect the API's performance or the user's experience.
Transcript
Well, thank you all. Thanks for the introduction. You know my name? My name's Dan. And I'm here to talk to you a little bit today about building APIs. Hopefully, you had a little bit of coffee or caffeine after lunch, so I'll try to warm you all back up. All right. So, mainly, I want to focus in on how you can improve the performance and reliability of your APIs with events and background functions, right? And this talk is aimed mostly at folks that are newer to building APIs, but I think it's still a valuable refresher for anyone who's looking at APIs from a first-principles perspective. So, let's jump in. Okay. So, let's say you're building an API. The code doesn't necessarily matter right there, so you don't need to squint, for whoever's missing
their glasses. So, for the sake of this lightning talk, we're going to keep it real light. We're going to just say this is a user signup endpoint, right? It starts off super simple. It's really clean. It's focused. It just does a couple of things: it creates a user in a database, creates a session, and redirects you to the dashboard. This is awesome.
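To make that concrete, here is a minimal sketch of the kind of endpoint Dan is describing, written with an Express-style handler; `createUser` and `createSession` are hypothetical helpers standing in for your own database and session code.

```typescript
import express from "express";

// Hypothetical helpers standing in for your own database and session logic.
import { createUser, createSession } from "./lib/auth";

const app = express();
app.use(express.json());

// A focused signup endpoint: create the user, create a session, redirect.
app.post("/signup", async (req, res) => {
  const user = await createUser(req.body.email, req.body.password);
  const session = await createSession(user.id);

  res.cookie("session", session.token);
  res.redirect("/dashboard");
});
```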
Whether it's a signup endpoint or something else doesn't matter; these things kind of bloat over time. What happens? Fast forward a few months and your endpoint ends up doing way too much, right? Now, you're starting a trial for the user. You're sending them a welcome email, and you're also adding them to a product newsletter because your marketing team wants it and needs it for onboarding and education and whatnot. Now, your user is paying the tax, right? You're doing a lot of work before a user can perform their action: getting into the dashboard, right? And this can happen. This is a simple example that we all probably have, and it could also happen in places where you're, say, handling file uploads and processing that data, or maybe you're chaining calls to LLMs and handling the hallucinations that naturally happen there. You're going to be doing a lot.
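Continuing the earlier sketch, the bloated version might look roughly like this; `billing`, `emails`, and `marketing` are placeholder clients, and every awaited call here both slows the response and adds a failure point.

```typescript
// Placeholder clients for the extra work that crept in over time.
import { billing, emails, marketing } from "./lib/services";

// The same endpoint a few months later: every new requirement landed inline.
app.post("/signup", async (req, res) => {
  const user = await createUser(req.body.email, req.body.password);
  const session = await createSession(user.id);

  // Each of these blocks the response and can fail independently.
  await billing.startTrial(user.id);            // start the user's trial
  await emails.send("welcome", user.email);     // send the welcome email
  await marketing.addToNewsletter(user.email);  // marketing's onboarding emails

  res.cookie("session", session.token);
  res.redirect("/dashboard");
});
```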
So, what's the impact of this bloat over time, right? First, it impacts performance. Things go from fast to slow, all right? And the worst part is that your users end up feeling the pain of the added complexity of your app, complexity that really is not necessary for them to start using your product. So, this hurts user experience, it hurts user retention, and overall it's just going to affect business outcomes, right? Like, it sounds boring, but it's true. So, this is not good. It also affects reliability, because now your endpoint does way too much. It's doing these multiple steps, and what happens if a third-party API request fails, right? Like, you don't want to prevent the user from signing up if you couldn't send them a welcome email, but you also, like, don't want to fail silently and forget to send them a welcome email. So, you kind of need to handle these different things. Failing the entire endpoint also ends up, like, being a pretty bad experience. So, to make web apps and APIs that are fast and performant, I think you should embrace the asynchronous. So, async on the back end
means background functions, or what other folks might call background jobs. And to keep your API fast, the key thing is that you need to offload this logic into the background: you take that logic and you remove it from the critical path of that API endpoint. If you're not familiar with the term critical path, it basically means the minimal amount of code needed to complete the operation. So, you should focus in on that to keep your APIs super fast.
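As a rough sketch of that refactor (the `queue` client and its `enqueue`/`process` methods are placeholders, not a specific library's API), the endpoint keeps only the critical path and hands everything else to a background worker:

```typescript
// `queue` is a placeholder client: a message queue, a serverless queue, or
// a durable-execution tool would fill this role.
declare const queue: {
  enqueue(name: string, data: object): Promise<void>;
  process(name: string, handler: (data: any) => Promise<void>): void;
};

// Critical path only: create the user, create the session, respond.
app.post("/signup", async (req, res) => {
  const user = await createUser(req.body.email, req.body.password);
  const session = await createSession(user.id);

  // Hand everything else to the background.
  await queue.enqueue("post-signup", { userId: user.id, email: user.email });

  res.cookie("session", session.token);
  res.redirect("/dashboard");
});

// Elsewhere, a worker processes the job off the request's critical path.
queue.process("post-signup", async ({ userId, email }) => {
  await billing.startTrial(userId);
  await emails.send("welcome", email);
  await marketing.addToNewsletter(email);
});
```

The response now waits only on the user and session being created; everything else happens after the fact.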
To do this, you might need to bring in some new tech. You know, you need to reach for queues, maybe serverless queues, maybe a tool that provides durable execution or durable workflows. And when this code actually runs in the background, how do you know that it actually ran, right? That's the benefit of a synchronous API endpoint: you get a response, it either failed or it succeeded, and the user can take an action. So, how do you make sure that background work runs reliably? Your background functions need to be able to automatically retry. So, when there are errors or blips, make sure that when you add this into your system, you're retrying automatically.
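Most queues and durable-execution tools give you retries through configuration, but as a hand-rolled illustration of the idea (the helper and the numbers here are just for the sketch):

```typescript
// Retry a flaky step with exponential backoff before giving up.
async function withRetries<T>(
  fn: () => Promise<T>,
  maxAttempts = 5,
  baseDelayMs = 500,
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxAttempts) throw err; // surface it for alerting/recovery
      const delayMs = baseDelayMs * 2 ** (attempt - 1);
      console.warn(`Attempt ${attempt} failed, retrying in ${delayMs}ms`, err);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}

// e.g. inside the background worker, wrap the flaky third-party call:
// await withRetries(() => emails.send("welcome", email));
```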
Especially if you use LLM APIs, you know, with this, things just break, things just don't work. So, you need to make sure you have that. And then, to debug issues in production, you're always going to want to make sure you have logs so you can introspect something later and debug issues. You're also going to want to plan and investigate what it's like to recover from these issues, right? If you ship a bug into production and a function fails 1,000 times, you're going to want to figure out how to recover from that, because we all ship bugs. I do too. So, most folks moving this code to background jobs often consider a message-based approach first,
right? Message queues are very popular, but I'd like to compare that with an event-based approach today. And as we've all been talking about the web a lot today, I think we should all be familiar with events, right? They happen in the browser: there's a button click, et cetera, something reacts to it, you have a handler, et cetera. But what about on the backend? What does that look like? So, let's look at a classic approach using a message queue. And I'm jumping a little bit here because it's a lightning talk. But, you know, in a normal setup, if you're doing three different things, you're going to have three distinct queues, because each should be independent. Then you're going to send three different messages to those three queues, and they're going to be processed by three different workers. Okay.
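Sketching that queue-per-task setup (`createQueue` and its `send`/`worker` methods are placeholders for whatever broker or client you use; `billing`, `emails`, and `marketing` are the same hypothetical clients as before):

```typescript
// `createQueue` is a placeholder for your broker/client of choice.
declare function createQueue(name: string): {
  send(data: object): Promise<void>;
  worker(handler: (data: any) => Promise<unknown>): void;
};

// One queue per task, so each can be processed and scaled independently.
const trialQueue = createQueue("start-trial");
const emailQueue = createQueue("welcome-email");
const newsletterQueue = createQueue("newsletter-signup");

// The signup endpoint now has to know about all three queues.
async function afterSignup(userId: string, email: string) {
  await trialQueue.send({ userId });
  await emailQueue.send({ email });
  await newsletterQueue.send({ email });
}

// And each queue needs its own worker.
trialQueue.worker(async ({ userId }) => billing.startTrial(userId));
emailQueue.worker(async ({ email }) => emails.send("welcome", email));
newsletterQueue.worker(async ({ email }) => marketing.addToNewsletter(email));
```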
And now, compare this with an event-based approach. You can send a single event, "user created," and it can trigger three different functions. This approach is called fan-out. And this is pretty cool, as your endpoint can just send a single event and describe what happened: the user was created. Any function can listen to this event, so you can remove one of these functions, or add a new one, without having to go back to your API endpoint and push to another queue.
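A sketch of the fan-out pattern with a placeholder event client (the `send`/`on` API here is illustrative, not a specific SDK):

```typescript
type UserCreated = { userId: string; email: string };

// `events` is a placeholder event client; billing, emails, and marketing are
// the same hypothetical service clients as in the earlier sketches.
declare const events: {
  send(name: string, data: object): Promise<void>;
  on(name: string, handler: (data: any) => Promise<unknown>): void;
};

// Inside the signup endpoint: one event describing what happened.
async function afterSignup(user: { id: string; email: string }) {
  await events.send("user.created", { userId: user.id, email: user.email });
}

// Each piece of follow-up work subscribes independently; if one fails,
// only that handler retries.
events.on("user.created", async ({ userId }: UserCreated) => billing.startTrial(userId));
events.on("user.created", async ({ email }: UserCreated) => emails.send("welcome", email));
events.on("user.created", async ({ email }: UserCreated) => marketing.addToNewsletter(email));

// Adding or removing a handler never touches the signup endpoint itself.
```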
So, what are some key benefits of using events? Let's review. Events can improve decoupling, since they describe what happened in your system rather than prescribing what you want to happen. You can also easily fan out; like I said, you can add or remove functionality. And these things can run in parallel, independently, so if one fails, it can retry independently.
And events are also facts: something happened in your system. So, storing events can be really useful for recovery, for debugging, maybe even for data analysis in your app. So, let's just wrap things up a little bit. For a good user experience, your APIs must be fast, and they must be reliable. You should aim to keep all of this extra stuff out of the critical path of your API; so, that's performance. You should also move that remaining logic, then,
into background functions or something else that's processing it off the main thread of the API request. And then, to do this well and reliably, you should always set up automatic retries and some sort of recovery mechanism. And lastly, I hope at least I shed a little light on it,
I think you should consider using events to decouple this logic. So, all right. That's all. I hope this inspired you or pushed you a little bit to think maybe about events, or about how you can do this in your application. And if you want to come talk to me about this, hit me up. Or if you want to learn about how Inngest can help you with this in your application,
let me know. All right. Thank you.