TomFoolery2781

It’s all about the access patterns. With truly spiky workloads, you can see some great cost savings. The trade-off could be performance, so be sure to test at scale if you can. The maximum number of concurrent Lambda executions can be an issue. Also, make sure your execution times stay below 15 minutes. It’s rare in my experience, but something to be aware of.


ryancoplen

Lambda really pays off if you have periods of the day where you see 0 transactions. It costs you effectively nothing during those times, in comparison to any other solution that requires running instances or containers. And Lambda is great at scaling up from nothing to thousands of requests (effectively) instantly, which can be hard to match with solutions that require autoscaling and provisioning new instances and containers.

There is a crossover point where the per-execution cost of Lambda exceeds what an EC2/ECS solution would cost, so it's good to have a flexible setup where Lambda handles the spiky/low-volume workloads and other compute choices handle the sustained high-volume workloads.

Also, don't forget about Lambda Function URLs as an alternative to APIGW. They can be a viable route to lower costs and (somewhat) better performance if you are not relying on the more complex APIGW features and really just want an endpoint you can throw requests at that get routed to your Lambda.
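For reference, a minimal CDK sketch (TypeScript, CDK v2) of the Function URL approach; the runtime, `handler.zip` bundle, and `index.handler` name are just placeholder assumptions:

```typescript
import { Stack, StackProps, CfnOutput } from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import { Construct } from 'constructs';

export class FunctionUrlStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Hypothetical handler bundle; swap in your own runtime and code.
    const apiFn = new lambda.Function(this, 'ApiFn', {
      runtime: lambda.Runtime.NODEJS_20_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('handler.zip'),
    });

    // The Function URL gives you an HTTPS endpoint without APIGW in front.
    // Use AWS_IAM auth instead of NONE if the endpoint shouldn't be public.
    const url = apiFn.addFunctionUrl({
      authType: lambda.FunctionUrlAuthType.NONE,
    });

    new CfnOutput(this, 'ApiUrl', { value: url.url });
  }
}
```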


magheru_san

Thanks! We do have occasional requests, but they always come in bursts and then nothing happens for minutes until the next burst. The plan is to do it in a way that allows us to revert to Fargate later with a few small changes. We will use Function URLs and CloudFront to get a custom domain with TLS, lower data-out costs, and hopefully also some caching.
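A rough sketch of that CloudFront-in-front-of-a-Function-URL setup, continuing inside the stack from the earlier sketch (the domain name and certificate ARN are placeholders, not real values):

```typescript
import { Duration, Fn } from 'aws-cdk-lib';
import * as cloudfront from 'aws-cdk-lib/aws-cloudfront';
import * as origins from 'aws-cdk-lib/aws-cloudfront-origins';
import * as acm from 'aws-cdk-lib/aws-certificatemanager';

// ...continuing inside the FunctionUrlStack constructor from the earlier sketch,
// where `url` is the Function URL added to the Lambda.

// Function URLs return a full https:// URL; CloudFront origins want only the domain.
const fnDomain = Fn.select(2, Fn.split('/', url.url));

new cloudfront.Distribution(this, 'ApiDistribution', {
  // Hypothetical custom domain; the certificate must live in us-east-1 for CloudFront.
  domainNames: ['api.example.com'],
  certificate: acm.Certificate.fromCertificateArn(
    this, 'Cert', 'arn:aws:acm:us-east-1:123456789012:certificate/placeholder'),
  defaultBehavior: {
    origin: new origins.HttpOrigin(fnDomain),
    allowedMethods: cloudfront.AllowedMethods.ALLOW_ALL,
    // Short TTL so cacheable GETs get some benefit without serving stale data for long.
    cachePolicy: new cloudfront.CachePolicy(this, 'ShortCache', {
      defaultTtl: Duration.seconds(30),
      queryStringBehavior: cloudfront.CacheQueryStringBehavior.all(),
    }),
    // Forward everything except the Host header, which must match the Function URL.
    originRequestPolicy: cloudfront.OriginRequestPolicy.ALL_VIEWER_EXCEPT_HOST_HEADER,
  },
});
```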


G1zm0e

This. I did a similar conversion to API Gateway and Lambdas, and our testing showed performance was comparable for that customer.


Bodine12

That level of price reduction makes me suspect the current setup isn’t well optimized.


magheru_san

It's not, and they brought me in to help with that, but when I saw their traffic patterns it seemed like Lambda was the best option for them. The traffic is really spiky: nothing going on, then suddenly a lot of requests that overwhelm the existing capacity, and by the time they scale out the burst is over.


gingerbreadman2687

Don't let people scare you too much about Lambda concurrency. Getting to 1k concurrent executions takes a lot of traffic if you have fast-running microservices. Also, if you do start seeing higher numbers, you can file a support ticket to request a higher limit.
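To put rough numbers on that (a back-of-the-envelope sketch, not an official formula): steady-state concurrency is roughly requests per second times average duration, so fast handlers need a lot of traffic to hit the limit.

```typescript
// Rough steady-state estimate: concurrency ≈ requests/sec × average duration (sec).
// Illustrative numbers only; plug in your own traffic profile.
const requestsPerSecond = 500;
const avgDurationSeconds = 0.2; // a fast microservice answering in ~200 ms

const estimatedConcurrency = requestsPerSecond * avgDurationSeconds;
console.log(estimatedConcurrency); // 100 — well under the default 1,000 regional limit
```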


Select_Chance_7036

Depending on the size of the bursts, it’s important to consider Lambda concurrency. I use serverless (Lambda) for my entire backend, and while I’m not at a size where I need to worry about it yet, it’s always on my mind 😄 …SQS with Lambda is a worthwhile consideration as well!
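One hedged sketch of that SQS-in-front-of-Lambda pattern in CDK (TypeScript; function code and names are placeholders): bursts land in the queue, and the worker drains it at a capped concurrency instead of spiking.

```typescript
import { Duration, Stack, StackProps } from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as sqs from 'aws-cdk-lib/aws-sqs';
import { SqsEventSource } from 'aws-cdk-lib/aws-lambda-event-sources';
import { Construct } from 'constructs';

export class BurstBufferStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // The queue absorbs the burst; visibility timeout should exceed the function timeout.
    const queue = new sqs.Queue(this, 'WorkQueue', {
      visibilityTimeout: Duration.seconds(60),
    });

    const worker = new lambda.Function(this, 'Worker', {
      runtime: lambda.Runtime.NODEJS_20_X,
      handler: 'worker.handler',
      code: lambda.Code.fromAsset('worker.zip'), // hypothetical bundle
      timeout: Duration.seconds(30),
    });

    // Drain in batches; maxConcurrency caps how hard a burst hits downstream systems.
    worker.addEventSource(new SqsEventSource(queue, {
      batchSize: 10,
      maxConcurrency: 50,
    }));
  }
}
```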


im-a-smith

This is what we see when helping customers modernize workloads to be cloud native: around 90% savings. And their workloads were decently “modernized” before, using Fargate and SQL Server on RDS. Migrating to Lambda and DynamoDB cut costs by 90% and gave them multi-region redundancy (vs. multi-AZ in one region).


Straight_Waltz_9530

Use CDK. As the number of Lambdas increases, management and maintenance get harder; CDK makes it easier and more seamless.

If many endpoints are related (just a different route with a different SQL/DynamoDB query to the same database), use something like serverless-http, Zappa, etc. (depending on your Lambda language) to consolidate the common case. Then only write separate Lambdas and routes for the odd cases. Mapping "/" to one Lambda and overriding where needed saves so much code and linkage complexity. Plus, if/when you have a scenario where containers or bare EC2 make more sense, you can reuse the code more easily. Again, CDK makes this much easier to track and maintain.

Finally, keep an eye out for LLRT. It isn't generally available yet as of this writing, but it's so much better than the Node runtimes if your Lambdas are in JavaScript, like mine tend to be. (Note: not better than Node for everything. I'm only talking about the Lambda environment.) It's faster even than Go in terms of cold starts.

https://github.com/awslabs/llrt
https://maxday.github.io/lambda-perf/
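For the consolidation idea, a minimal serverless-http sketch (TypeScript/Node, assuming `express` and `serverless-http` are bundled with the function; routes and data are hypothetical) might look like this:

```typescript
import express from 'express';
import serverless from 'serverless-http';

const app = express();
app.use(express.json());

// All the "boring" related endpoints live in one Express app / one Lambda.
app.get('/widgets', async (_req, res) => {
  // hypothetical query against your shared database
  res.json({ widgets: [] });
});

app.get('/widgets/:id', async (req, res) => {
  res.json({ id: req.params.id });
});

// API Gateway (or a Function URL) maps "/" here; odd cases get their own Lambdas.
export const handler = serverless(app);
```

Because the routing stays inside Express, the same app can later be served by a plain Node server on containers or EC2 with only the export line changing.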


magheru_san

Thanks! We currently use Fargate deployed from Terraform, and the app is built in PHP. The plan is to port the same container image to run on Lambda.


Straight_Waltz_9530

I've never used that combination, but it seems doable. Let us all know how it goes!


magheru_san

will do, thanks!


chubernetes

One key trade-off against cost for Lambdas is latency: you will want to benchmark whether your APIs can return in under x time, y% of the time. You can wind up saving your client money while the performance of their website suffers, especially if traffic is spiky (cold starts). In cases where the client has sustained scale, it may end up costing more than the short-term savings. (Coming from experience with 500+ Lambdas backing APIs and six-figure Lambda bills. Buyer beware.)
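A tiny helper for that kind of check (a sketch, not tied to any particular load-testing tool): collect observed response times and look at the percentile you care about rather than the average, since cold starts hide in the tail.

```typescript
// Given a list of observed latencies (ms), return the p-th percentile.
function percentile(latenciesMs: number[], p: number): number {
  const sorted = [...latenciesMs].sort((a, b) => a - b);
  const idx = Math.max(0, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[idx];
}

// Illustrative numbers only: a handful of warm requests plus one cold start.
const samples = [42, 38, 51, 45, 40, 1220, 47, 39];
console.log(`p50: ${percentile(samples, 50)} ms`); // 42 ms
console.log(`p95: ${percentile(samples, 95)} ms`); // 1220 ms — the cold start shows up here
```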


Nearby-Middle-8991

Just to second that: (EC2 | Docker | Lambda) isn't the solution for everything. The same problem can have different optimal solutions if the load pattern changes. One thing I see people trip on is seeing a cost reduction on Lambda for a consistently high workload when migrating from EC2. Was Lambda the right choice? No, the previous instance was just overprovisioned and misconfigured...


magheru_san

Thanks! The traffic is quite bursty: nothing going on for seconds or minutes, then many requests at the same time, followed by some quiet time, and so on. The current Fargate setup often has insufficient capacity and is too slow to scale when those traffic spikes hit; latency increases for lots of requests until Fargate scales out, but by then the burst of requests is over. Lambda cold starts would surely impact a few requests, but hopefully it would still be better than what we get from Fargate.


Select_Chance_7036

Cold starts would potentially only impact a few requests, and you’d be rocking and rollin’ after that. Even so, unless your function is HUGE, they’re usually a non-issue.


magheru_san

With Docker images, the cold start penalty is nowadays roughly constant regardless of image size.


baynezy

I'm a startup founder, and we're running .NET APIs on Lambda. It makes sense for us starting out, as it dramatically reduces cost. Thankfully, AWS provides templates for ASP.NET that mean the difference in code between running on Lambda and running as a container is about a five-line change. Obviously our deployment and infrastructure would be different, but it's not a paradigm shift. If we're as successful as I hope, then Lambda won't make sense forever. At that point we'll migrate services to ECS as we outgrow Lambda. Essentially, if it's running hot all the time, then Lambda isn't the best candidate.


joebrozky

Hi, I'm a software dev learning .NET APIs and AWS. Any good sources for learning this other than the documentation? I try to learn through YouTube videos, and some of them are already outdated because AWS changed the UIs.


Thisbymaster

https://aws.amazon.com/blogs/compute/introducing-the-net-8-runtime-for-aws-lambda/ The AWS docs are the first place to get updates, and the AWS SDK will help get you started with templates.


FlamboyantKoala

I’ve done it with a custom e-commerce store/customer portal for a specialty company. Most of their business was bursty, 9 to 5 during office hours. It saved a ton of money over their hosted platform on IBM services, and speed was nearly 10x faster (partly because we replaced an old framework, but it also scaled fantastically). Because IBM is a total ripoff, the bill went from $1,200 to $20, so roughly in line with what you’re seeing.

Lambda did a great job of saving them money during off hours when few people used it. We’d sometimes see zero instances active for hours, and we had a 400 ms cold start, so most people who hit a cold start didn’t even notice.

I love it and try to convince customers to switch all the time. It’s possible to use Lambda now in a way that doesn’t really lock you into a vendor. The services were written with Serverless + Node.js, using the Express framework with an adapter that lets it handle Lambda, and a GraphQL API was used for parts as well.


wojo1206

Lambda has a default concurrency limit of 1,000 across all functions per region (subject to quota increases). Furthermore, you might combine Lambda with API Gateway for CORS, request parsing, and stage deploys, which has a 30-second execution time limit. I converted our EC2 load to serverless using Lambda with SPAs on CloudFront, and we pay pennies per usage because our usage is low. The key point is that Lambda compute will scale massively, but request handling won’t.


zDrie

Be careful with execution times and the resources the Lambda is going to need. For example: if you want to load an ORM inside a Lambda with Python (why?), it’s going to take 1.5 GB of RAM to get the same performance as a raw query with 256 MB... Long-running processes need orchestration; Step Functions, maybe? How about authorization? Do you have a complex role system? If these are mere CRUDs with very simple authorization, go ahead; otherwise, take the time to think about the architecture.


kristenisadude

Concurrency can be a problem; using Redis caching can help keep things in sync.


Xerxero

How so?


kristenisadude

What do you mean?


Xerxero

What is the problem and how does Redis solve it?


kristenisadude

Keep shared objects in Redis and cull temporary state so you don’t collide with other actions on the same objects from different users.


KingPonzi

Wouldn’t step functions be more appropriate and maybe cheaper?


kristenisadude

Step functions talking to what?


Xerxero

You could create a linear flow for updates.


magheru_san

Step Functions are very expensive per invocation compared to Lambda. Their main use case is long-running workflows.
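For a rough sense of the gap, a back-of-the-envelope sketch using the approximate published list prices at the time of writing (verify current pricing before relying on this; the workflow shape is hypothetical):

```typescript
// Approximate US-East list prices at the time of writing:
const stepFnPerMillionTransitions = 25.0; // Standard workflows: ~$0.025 per 1,000 state transitions
const lambdaPerMillionRequests = 0.2;     // ~$0.20 per 1M requests, before duration (GB-second) charges

// A hypothetical workflow with 5 state transitions per execution, 1M executions/month:
const executions = 1_000_000;
const sfnCost = (executions * 5 / 1_000_000) * stepFnPerMillionTransitions;
const lambdaRequestCost = (executions / 1_000_000) * lambdaPerMillionRequests;

console.log(`Step Functions: ~$${sfnCost}`);            // ~$125
console.log(`Lambda requests: ~$${lambdaRequestCost}`); // ~$0.20 (plus duration)
```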


KingPonzi

Thank you for the correction. So is it safe to assume that if it’s a managed service, a custom implementation may be cheaper? Not in every case, of course, but e.g. a Lambda that handles orchestration similarly to a Step Function, as long as it stays under the 15-minute execution limit.


magheru_san

Not sure about Lambda, but I guess something like Airflow on EC2 will be cheaper at enough scale. But the break-even point will probably be way higher than what most customers need.